What a week: the 2nd NSW Higher Education Summit at UNSW, the TEQSA Request for Information on AI and assessment, the Government consultation on Managed Growth, and talk of international student caps.
And now that UNSW is partnering with Future Campus, there’s a chance to share some insights with those who want to read further.
We had 500 registrants and stellar participants at the Summit: Michael Cowling, Tim Fawns, Kelly Mathews, Danny Liu, Jan McLean, Jason Lodge, Alex Steel, Patsie Polly and Sarah Maddison.
I won’t quote anyone, but I’ll summarise what I learnt.
Perhaps the wisest insight was that sometimes you shouldn’t rush to simple answers. When you realise that all the reflex proposals are not real solutions, you should “hold the problem open”. For many, that is where we are with AI in assessment. No solution is perfect.
I love this, but TEQSA doesn’t want us to write that in our report, and nor do academics, students, or the general public. So, let’s go further.
One major contribution to the debate about AI in assessment is the 2-track model: in track 1, AI is not allowed (or indeed possible); in track 2, students can access AI.
In the modern age, some argue that track 2 is more authentic, since we want our graduates to be able to use AI. But we also need graduates who can operate when AI is not available: when they are addressing a meeting, talking to their boss, or when the internet is down. So we need track 1 too.
But when we say we allow AI, how much AI assistance do we allow? Do we limit it to spellchecking, or prompting for an outline, or do we say, “go for your life”?
Setting too many levels confuses students. Hence, one idea is “all or nothing”.
It gets more interesting now – one suggestion is that if we can’t prevent AI use in an assessment, then we have to allow it.
This simplifies things and avoids a situation where students inadvertently cross the line or are tempted to transgress. I like it.
It avoids the “entrapment” of vulnerable students and pre-empts challenges with imperfect detection, accusations of cheating, and protracted investigations. If you cannot enforce an AI ban in your assessment, then just allow AI. End of story. Wow.
And celebrate, because we want to encourage our students to be able to use AI.
But hold on. This logic pushes us further.
We want to encourage teamwork too. We cannot prevent collusion, so perhaps we should allow that as well.
What about contract cheating?
We cannot eliminate that either. But I’m not so sure I’m liking where the logic is taking me.
A system of allowing whatever you can’t prevent sets up some odd equity issues. Students with the financial resources to purchase the latest AI, the social capital (friends, family, and connections who can help), or the loose integrity and wealth to buy help via contract cheating would be at an advantage.
And what about the academic who marks the assessment?
How much effort will they put into feedback when they suspect the assignment may be mostly the work of AI and other “helpers”? Not much. Especially if most of the papers submitted look roughly the same.
Students may all get marks between 75 and 90. It will be, in effect, “formative” assessment. Students who farmed the work out won’t have learned much.
So back to track 1 (no AI). I think track 1 means supervised assessments. These assessments will become the deciders. They will be high stakes. Are we back to the old days of ‘finals’?
Few people like this. We’ve tried to move on from the stresses of exams, but this logic takes us back.
Perhaps some people can design assessments which don’t require supervision.
One suggestion is to set very specific questions. In high school history I had to write about the history of the school itself. Even today, no one could cheat by getting help on that assignment. It taught us “historical methods”, but it was so boring. How I longed to study the causes of the Second World War or the fall of Rome. But those topics weren’t chosen, precisely because you could get help and cheat on them.
Other ideas, like portfolios, reflect overall effort, but are themselves an effort.
Then there is programmatic assessment. I take this to mean supervised assessment at key points: supervised assessments, but as few of them as possible. To me, this is where we’re headed.
It all depends on the discipline, of course. In science, there really is sometimes only one answer. In engineering, if you want to fly then there are a few: balloons, planes, helicopters, rockets. AI will give you the answers. In book writing or song writing there are infinite wonderful answers. AI will throw up random ideas, some of them good.
In which disciplines do we allow or encourage AI? Each discipline has to work it out for itself and report back. That’s one of the things UNSW has said in response to the TEQSA RFI.
TEQSA was right to ask how we are managing this.
I think the reason many want to “hold the problem open” is that one institutional rule doesn’t fit all. But I think we can get there. I’m a believer in the 2 tracks, but I don’t agree that track 2 (allowing AI) will be the major track for assessment (though it could be for learning). Track 1 assessments (no AI) will be the ones that provide assurance to TEQSA.
I lived through the arrival of the internet and have seen huge advances in computing. In molecular biology, this enabled the sharing and analysis of the genome. With AlphaFold we can now predict the structures of most proteins. In my discipline, it is still necessary for students to know some stuff (I believe in both rote learning and exams, just not too much of either). After learning the basics, students will need to know how to use AI to examine the genome and proteins. As speakers at the Summit said, we’re managing ChatGPT via track 1 and track 2, but each discipline will work things out and move on as more and more AI comes online.
Provided we are not overwhelmed by scale. Which brings me to my final point. This was also the week of Managed Growth Funding. I used to work with someone who said we would be a university with a million students online. I don’t think so. I have seen the University of Phoenix falter, and much as I admire the Open University, it’s not growing.
So, guess what: I’m fine with managed growth funding. I don’t want caps that take us backwards; that would be a catastrophe that would reverberate for decades and destroy our reputation. But managed growth funding? There’s a lot to like in it. I think we should manage the growth of both our domestic and international numbers from here on, without disappointing students who have just accepted their offers for 2025. I hope that’s where we end up.
The more dialogue we have, the more likely we are to get there. So I thank Future Campus for sharing this with 10,000 interested people and enabling discussion.
Professor Merlin Crossley is Deputy Vice-Chancellor Academic Quality at UNSW.