
Opinion
Australia’s universities are facing a moment of reckoning. Artificial intelligence may help them rebuild public trust — or be the force that exposes how fragile that trust has become.
Their financial models have been tested, their governance questioned, and their social licence — that tacit bond of trust between the academy and society — is under strain. Years of scrutiny, some well-founded and some less so, have shaken confidence in the sector's moral centre.
Yet the disruption of artificial intelligence could be the shock that restores clarity of purpose. Far from simply another technological wave, AI might act as a mirror, reflecting back to universities the question of what — and whom — they truly serve. Used wisely, it can become a catalyst for renewal: an opportunity to rebuild legitimacy through human capability, inclusion, and ethical practice.
This is the territory long described as the Third Mission — the work universities do beyond teaching and research to strengthen communities, foster innovation, and sustain democratic life. Once treated as a worthy add-on, it has become the crucible in which public trust can be rebuilt. The Third Mission is where a university’s values are most visible.
Across Australia, institutions are integrating AI into this civic space, experimenting with ways to translate digital capacity into social intelligence — aligning technology with purpose and public value. At Monash, the Data Futures Institute coordinates projects using AI “as a force for social good”. UNSW’s AI Institute combines research with public dialogue, led by ethicist Toby Walsh. The University of Melbourne’s Centre for AI and Digital Ethics embeds moral reasoning into technology design and policy, while ANU’s School of Cybernetics explores human-centred innovation. La Trobe’s AI for the Future Institute applies data science to regional and health challenges — from early disease detection and aged-care robotics to sustainable land management — demonstrating how civic AI can serve communities beyond metropolitan centres. RMIT’s City North precinct applies AI to urban and social challenges, and UTS is using it to close learning gaps and advance digital inclusion.
Individually these initiatives differ in scope, but together they reveal a sector reasserting its civic instincts. Each expresses a deeper awareness of agency and stewardship — the understanding that universities must not only develop AI but decide what kind of moral intelligence they wish to exercise in a digital world. This sensibility echoes ideas we have been exploring through the Soul of Higher Education Community of Practice, supported by the selfdriven Foundation, which examines how institutions can act with situational awareness, purpose, and trust. It also aligns with what is increasingly described as “civic AI” — technology guided by values, not velocity, and directed toward social rather than purely commercial ends.
Four Pathways in a Renewed Mission
As universities seek to re-establish trust and redefine their public purpose, four broad currents are emerging in how they use AI to serve society. These pathways are not discrete programs but overlapping expressions of the same intent — to reconnect technological innovation with human capability, civic inclusion, and ethical leadership. Together they show how the Third Mission is being reshaped in the AI age.
1. Community engagement and civic development
AI is being mobilised to strengthen communities rather than distance them. Swinburne’s Digital Inclusion programs, Charles Sturt’s research on AI in rural social services, and JCU’s modelling to protect the Great Barrier Reef all show a new ethic of co-design — technology informed by local knowledge and lived experience. Universities are becoming facilitators of community-centred innovation, recognising that the social licence of AI depends on how inclusive it feels to those most at risk of exclusion.
2. Public education and AI literacy
AI literacy is fast becoming a public good — and universities are its natural interpreters. Through open seminars, explainer videos and public forums, universities such as UNSW, Melbourne and Adelaide are demystifying AI and its ethics. In an era of deepfakes and misinformation, they act as translators between technical expertise and civic understanding — helping citizens think critically about technological change rather than merely react fearfully to it.
3. Knowledge exchange and innovation ecosystems
The economic and social dimensions of engagement are converging. Innovation precincts such as Melbourne Connect, Lot Fourteen in Adelaide and Tech Central in Sydney position universities as anchors in collaborative ecosystems where researchers, start-ups and communities work together. The emphasis is on shared innovation — knowledge circulating openly, with benefits distributed across industry, government and community alike.
4. Ethical leadership and responsible innovation
Universities are reclaiming their traditional role as moral stewards of knowledge. Centres such as Melbourne's CAIDE, UNSW's Allens Hub and UQ's Centre for Policy Futures embed fairness, transparency and human values into AI design and regulation. When generative AI exploded in 2023, it was universities that convened the first serious national conversations, choosing reflection over reaction. This ethic — engagement before exploitation — signals a maturing understanding of what responsible innovation requires.
Together, these pathways signal a transformation. AI is no longer peripheral to the Third Mission; it is becoming the means through which universities rediscover purpose. It provides a lens through which institutions can see what they truly value — and whether those values hold under pressure.
The New Tensions of Trust
Yet the same tools that offer renewal also expose contradiction. AI tests the moral infrastructure of universities as much as their technical capacity — and the renewal of social licence will depend on how these tensions are managed.
Digital exclusion remains the sharpest fracture. According to the Australian Digital Inclusion Index, about one in five Australians still struggle to access, afford and fully use digital technologies, with gaps concentrated by remoteness, disadvantage and age. Australia’s AI ambitions must therefore be matched by deliberate investment in inclusion — otherwise those who could benefit most risk being left behind.
Automation versus human contact is a subtler but equally critical tension. Chatbots and digital assistants can expand outreach, but empathy cannot be automated. Emerging research in psychology and education warns that over-reliance on conversational AI can foster dependency and social withdrawal, particularly among those who already find human interaction difficult. The challenge for universities is to use AI to extend human reach without eroding human touch — to balance scale with soul.
Ethical and reputational risk shadows every collaboration. As universities partner with government, Big Tech or defence industries, they must ensure that public values are not eclipsed by commercial ones. Mission drift — where civic rhetoric conceals market intent — remains a temptation. A single ethical failure can erode public trust that took generations to build.
Finally, there is trust itself. AI blurs the boundary between expertise and imitation, between informed judgment and synthetic authority. Universities must demonstrate that their use of AI is transparent, accountable and human-led. Policies alone will not suffice; credibility will rest on practice, openness and humility.
These tensions are not flaws but signs of maturity. They mark a sector wrestling, sometimes uncomfortably, with what it means to act responsibly in an age of automation. As Universities Australia observed in its 2024 submission to the Senate inquiry on Adopting Artificial Intelligence, universities should retain autonomy in how they adopt and govern AI — but that autonomy carries a responsibility to ensure these tools are used appropriately and ethically.
Redefining the Social Licence — Renewing the Social Contract
AI is prompting Australian universities to revisit a question that has always defined them: what does it mean to serve the public good? The traditional answers — teaching, research and economic contribution — remain vital, but no longer sufficient on their own. The sector now faces an opportunity, and an obligation, to use its intellectual and technological power with renewed purpose: to make society more equitable, informed and humane.
Handled wisely, AI could revitalise the Third Mission. It could expand participation, democratise knowledge and restore faith in the university as a civic institution. Mishandled, it could accelerate the erosion of trust that already haunts the sector.
The deeper question, then, is not whether universities can harness AI, but whether they can stay true to human purpose — rediscovering, in this age of intelligence, what it means to act with conscience and civic responsibility.