When Predictability Meets Disruption


While I have not always agreed with his conclusions, Andrew Norton has long been one of the sharpest analysts of Australia’s higher education system. When he warns that universities risk “stranded resources” under the new, “enhanced” compact regime, it is worth paying attention. His concerns, put plainly, are that mission-based compacts could lock institutions into enrolment profiles, course mixes and growth settings that date quickly.

It’s a sobering thought. Universities are being asked to fix their gaze on a 2027 horizon, when the new system will be fully operational under the Australian Tertiary Education Commission (ATEC). Yet the ground beneath the sector is shifting faster than any funding or regulatory cycle can manage. Nowhere is this mismatch clearer than with generative Artificial Intelligence.

Compacts appear to serve two purposes. First, they assure government that universities can deliver predictable returns on public investment. Second, they are instruments of government steering that, if well designed, can contribute to nation-building and the attainment of national goals. By tying Commonwealth Supported Places, growth allowances and performance expectations to the compact, Canberra shapes not only how universities account for themselves but also the boundaries of what they can become.

The problem is that predictability and disruption rarely mix.

If the Government’s steering signals misalign with how AI reshapes demand, universities may be directed — by policy design — into course profiles and investment pathways that rapidly age. A mechanism built to ensure reliability could, inadvertently, increase vulnerability to existential shocks.

I recently shared in Future Campus some work on AI and Strategy (with technologist Mark Byers), and on AI and Risk Management. That work made one thing clear: too many institutions, where they consider AI through these mechanisms at all, treat it as either an operational threat or a useful tool, without also reckoning with its transformative force on themselves and wider society. Universities appear eager to domesticate AI: to wrangle its integrity challenges and fold it into a broader “digital uplift” designed to improve learning platforms or streamline service delivery. And in risk, objective-level exposures rarely surface.

It appears that institutional frameworks reward what can be controlled over what might be consequential — managing what is measurable, forgetting what matters most.

In both strategy and risk, what appears missing is the harder admission that AI could devalue degrees, reshape demand, or cannibalise research value, among other consequences. AI is still seen as a widget, not a wedge.

This pattern extends to how institutions have approached the compact process itself. An examination of the 43 Australian University Mission-Based Compacts lodged with the Commonwealth in 2024 tells a similar story.

At first glance, what is most interesting is that only 22 institutions even mention AI explicitly when addressing their overall mission and strategic planning, and their specific strategies for improving equality and opportunity; teaching and learning; research, research training and innovation; and engaging with industry. Where these institutions do address AI, they mostly describe sophisticated capabilities for AI engagement, spanning not only the categories above but also their own internal operations and processes. But again, there is little talk of the wedge and its potential disruption.

Of these 22 institutions that explicitly addressed AI, only two speak to the idea that AI has the potential to be an existential disruptor, and one other speaks of workforce disruption without specific reference to AI.

As for the compacts that were silent on AI: why did the other 21 institutions deem it unworthy of consideration in any of the compact's areas of reporting?

Looking ahead to the new compact model, the sector must avoid a regulatory trap that demands comprehensive commitments to obsolete assumptions. A framework designed to provide stability cannot become a source of strategic rigidity precisely when disruption demands institutional agility. Compacts must go beyond assurance; they must be adaptive instruments. Rather than rewarding neat alignment with existing strategy and risk artefacts, they should create space to confront uncertainty constructively.

So what might that look like?

    1. Agility inside the compact. Build structural flexibility into regulatory oversight. This is especially important with the projected three-year cycle. Specify clear triggers for reassessment (e.g., demand shocks, labour-market signals, technology inflection points). Move from static cycles to an adaptive cadence — and clarify who decides when the trigger is pulled.
    2. Enterprise-level ownership. Whole-of-institution approaches to AI’s disruption are essential. Making AI’s threats and opportunities a first-class topic in compact negotiations could help break institutional siloes and force leadership attention across the enterprise and beyond.
    3. Engage with Risk. Design compacts to talk about risk as well as strategy. Capture acceptable volatility thresholds (enrolments, program redesign, delivery costs, etc.). Tie these to institutional risk appetite so oversight focuses on consequences, not just controls.
    4. Shared risk, shared purpose. If AI is reshaping society, neither government nor universities can carry the uncertainty alone. Why should institutions promise predictability while bearing all the risk? Consider risk-sharing mechanisms — temporary funding floors, flexible profiles, or rewards for demonstrated adaptation capacity — so the system doesn’t punish responsiveness.
    5. Clear governance boundaries. The compact system is not operating in isolation; it intersects with broader reforms, including the university governance review. As government control increases through compacts and other Accord reforms, while blame for strategic failures is simultaneously placed on university governance, a concerning tension emerges. What strategic decisions will remain within the purview of university decision-making if compacts increasingly dictate institutional direction? Will one result be more examples of University Councils crossing the management line because they have less responsibility at the strategic level? The compact system needs to clarify these governance boundaries, particularly regarding how institutions can exercise strategic autonomy when confronting existential challenges that require rapid institutional adaptation.
    6. Mandatory scenario planning. Require each compact to model at least two AI disruption scenarios and outline the institutional response — especially how universities would adapt within compact constraints if demand patterns shift quickly. Aggregated, these could inform a sector playbook for ATEC and providers.

Are compacts capable of managing disruption — or only ensuring continuity? How do you hold a transforming system accountable to stable public goals? As the new compact system is finalised, these seem to me to be at least some of the questions that matter.

The aim should be a compact model that acknowledges uncertainty without abandoning accountability: one that measures not just alignment, but adaptive capacity. Without that shift, the sector risks being perfectly aligned — and perfectly unprepared.
