
I’m hearing concerns from students about peers cheating in the age of AI.
But it’s not just AI; a series of events has combined to challenge us.
First, moving from elite to mass education (hooray – but it was a big change); then the internet making information available everywhere; next, the idea of assessing online, which became universal during COVID; and finally, AI.
Moving to mass education was a huge thing.
When only the privileged few went to university (along with some rigorously selected poorer scholarship students) things were different. Cheating seemed less of an issue. The rich could just drop out rather than cheating! And it was harder to find someone to cheat from. If you were in a small class studying something very specific, how could you find someone better placed than you (and who looked like you) to do your old-fashioned exams or vivas?
Shift forward a few decades.
Let’s say 50% of school leavers go to university. Some are more prepared than others. Some can devote more time to university than others. The cost of mass education is high, so students are now paying partly for themselves. Many can’t afford to pay for a subject twice. If you are under huge pressure to pass, there may be many around you who might ‘help’ you cheat.
When the internet arrived, those available to ‘help’ became innumerable.
Software, like Turnitin, evolved to detect internet-enabled plagiarism. But then international contract cheating companies popped up. It was suddenly easy to outsource your assignments and circumvent plagiarism detection tools.
Universities were swinging into action on this, when…COVID struck.
All assessments went online. Not because that was ideal, but because there was no choice. It worked well enough. But this apparent success hid the fact that it hadn’t really worked. Academics found evidence of cheating, and students complained that some peers were cheating.
Then ChatGPT arrived.
It stunned many of us. Software was created to detect its use, but counter tools evolved. Detection is no longer reliable.
AI keeps moving and yes, we need to teach our students how to use it. But that narrative – of embracing AI – risks distracting us from the fact that AI can also compromise the integrity of assessments that were designed before it existed.
TEQSA reminded us we have a responsibility to assure learning outcomes. Our students want us to fight cheating. So do academics. So does society.
I see three solutions being worked through.
The first boils down to assessing learning “all the time”.
Watch the process of learning in the classroom, give regular supervised quizzes or vivas, watch students as they do practical projects or develop assignments in front of you. This works but can be stressful and requires inflexible time commitments for teachers and students. Ouch.
The second idea is to go back to old-fashioned supervised exams at the end of a period of learning. Ouch, some people don’t like exams. Some say students can beat the security anyway – with a note hidden in their sock, say – but that argument is weak, as socks are pretty small compared to the internet!
Invigilated exams remain one good default option for countering AI, contract cheating, internet-enabled plagiarism, and if done properly even identity fraud.
A third idea is to design assessment tasks so cleverly that cheating is impractical. Many academics talk about this, but it is important they articulate their solutions clearly. At present most methods are not being communicated widely. I hope this will change.
So where do we stand?
Students and society accept that supervised exams and vivas are options and can be used for grading.
But suddenly take-home or online tasks (unless cleverly designed) can no longer be relied upon for marks because they can be outsourced to AI, to contract cheating companies, or to friends. We can still use them as non-graded hurdles to keep students on track, because if students cheat their way over hurdles, then they truly will only be cheating themselves, since they will not be prepared for the supervised assessments.
It's like learning to drive. The learners’ permit is the supervised HSC exam. Then you do 100 hours of practice with a logbook. You don’t get marks, and if you put your self-driving car on auto-drive every day, you will be in trouble when you face the supervised exam.
But let’s imagine that we optimistically remove that final supervised driving test because it might be stressful, and go for a ‘trust-based’ system for the sake of student well-being and ease. You will get the opposite!
First, a few students, perhaps those facing unusual hardship, would feel they needed to cheat. Others, seeing this, might conclude that since ‘everyone is cheating with AI’ they have to cheat too. Eventually some students would inform on others, and painful accusations, investigations, and punishments would ensue.
And cars would literally crash.
By putting temptation in the way of young people, we’d risk a culture of suspicion between students, staff, and society, and an arms race of policing.
But all this can be avoided. We just have to ensure that, however we do it, students face assessments that are not just secure but are seen to be ‘virtually’ impregnable to cheating in the modern age. Exams are one obvious default but I welcome a debate about other possibilities.
Professor Merlin Crossley is DVC (Academic Quality) at UNSW.