
Debates about the ways universities can assure learning in an era of AI are timely and important. One thread of this debate centres on learning within online environments, arguing online programs lack the relational depth and trustworthy evidence needed for defensible academic judgement. Many of the concerns raised deserve attention. Students do need opportunities to demonstrate how they think, not just what they submit, and institutions do need clearer program-level evidence that graduates have met the outcomes we claim.
However, it would be a mistake to treat these challenges as distinctive to online learning. Across Australian universities, a substantial proportion of on-campus students attend campus irregularly due to a range of competing priorities, including work, caring responsibilities and long commutes. Staff often rely on the same take-home, text-based tasks used in online programs. AI challenges all modalities, not only digital ones, because it changes how learning must be evidenced and observed.
The distinction that matters now isn’t physical versus online delivery; it’s whether a learning environment is deliberately designed to generate valid, observable evidence of students’ capability.
Our work on behalf of our five Australian university partners takes that design challenge seriously. Drawing on TEQSA guidance, pilot testing and the emerging literature, including Dollinger et al.'s (2025) briefing paper on assurance of learning in fully online credentialed programs, we’ve developed a connected framework of assurance that operates across three dimensions: relational, technical and pedagogical. The aim is to strengthen assessment validity and address integrity risks while protecting the flexibility and access that students value when learning online. Through engagement with sector experts, collaboration with our partner universities, and consideration of the regulatory environment, we have developed a system-ready model that translates policy intent into executable practice.
1. Relational: making students knowable as thinkers
One argument put forward is that “online learners progress without meaningful educator–student connection, limiting confident judgement”. We agree that relational knowledge matters; educators make stronger inferences when they’ve heard students explain choices, test ideas and talk through their reasoning. But this isn’t guaranteed by physical proximity. Large lecture formats, optional attendance and asynchronous submission patterns mean many on-campus educators assess students they’ve barely met.
Our response is to design relational contact into online programs rather than hoping it emerges. We’re piloting one-to-one and small-group check-ins with teaching staff, separate from formal assessment, where students can articulate their thinking in low-stakes settings and staff can develop a sense of each student’s academic voice. We’re also introducing secure oral assessments supported by identity verification and controlled environments, enabling educators to engage directly with students’ reasoning and probe for depth of understanding. These relational approaches sit alongside placements in relevant programs, which offer another authentic, observable site where learning can be demonstrated.
Far from being a deficit, the flexibility of online delivery allows every student to have structured interaction points, not only those who can attend campus at set times.
2. Technical: strengthening validity in an AI world
It is right to flag that AI raises new risks for assessment security, particularly in online contexts. Two concerns dominate current sector conversations: the possibility of deepfake-style identity spoofing and the ease with which students can access AI tools during assessment. Our technical assurance layer, designed for synchronous oral assessments, tackles both issues directly.
Identity verification, using government records and databases combined with liveness testing, confirms that the person present is the enrolled student, not a manipulated recording or AI-generated representation. Once inside the assessment environment, browser lockdown restricts access to unapproved digital tools while allowing materials that are legitimately part of the task. Together, these controls create an authenticated, bounded setting for real-time assessment. Rapidly evolving technology means this technical dimension isn't a 'silver bullet'; it will need to develop alongside advances in AI. The same challenge applies to in-person secure environments, particularly with the emergence of wearable technologies.
Recordings of these formal learning touchpoints will support post-hoc integrity reviews where needed, while scheduling software carries the administrative load, making assurance scalable. Proctored exams and placements continue to provide secure opportunities to generate learning evidence in relevant programs, complementing relational assessments such as oral examinations.
When designed with these controls, online environments can provide a more transparent and auditable record of performance than many physical settings. Rather than adding uncertainty, this reduces risk and creates conditions where assessment at a physical location is not necessary.
3. Pedagogical: building cumulative, program-level evidence
The challenges of AI mean assurance of learning can no longer rest on isolated units or single artefacts. Our learning design and teaching teams are working with partners to implement secure assessments and program-level redesign – this is the foundation of credible assurance. We are advocates of Danny Liu and Adam Bridgeman’s two-lane model for its security, validity and clear expectations for students, and believe that building a coherent body of evidence across a degree is essential for all programs, whether delivered online or on campus.
A key strength of online delivery is the volume and quality of observable data generated through students’ interactions with the LMS, where formal learning materials and activities are hosted. While we currently track this engagement for retention initiatives, we can also use it as supporting evidence for learning assurance. AI tools, such as the virtual tutor that we’ve developed, have the potential to be another rich data source, alongside non-academic support staff such as our student coaches and advisors who regularly assist our students in developing key academic skills. Across a program, this combination of relational interactions, secure assessments and longitudinal data will help educators triangulate evidence of learning and make more confident judgements than a single submitted artefact can ever provide.
By combining program-level design with this richer evidence base, we can build a cumulative, defensible picture of capability that aligns with contemporary validity expectations.
Conclusion: rigour that doesn’t close doors
The students who rely on online learning – regional and remote learners, carers, mature-age students, people with disability, and those balancing work and study – can no longer be considered an edge case of Australian higher education; they’re at its centre. Protecting the integrity of their programs matters because online learning is the pathway that enables their participation in the first place.
But the solution isn’t to view these students’ circumstances as a barrier to credible assurance. It’s to design assessment systems that give them structured, supported and secure ways to demonstrate their learning without stripping away the flexibility they depend on. This will not be achieved through narrow detection and modality-based controls, but through a connected assurance of learning framework.
By building relational, technical and pedagogical layers of assurance, we can provide all students with inclusive, credible and supportive ways to demonstrate their learning without removing the flexibility contemporary life demands. In that sense, strengthening assurance in online environments isn’t a concession we make for equity students, it’s an opportunity to build models that can strengthen the sector as a whole.
Access the paper here.
Dr Erin Jancauskas is Chief Academic and Partnerships Officer and Amanda Ford is Associate Director, Generative AI at OES.