
Universities are working on ways to stop AI upending assessment as students let the tech do the writing for them – but what happens if there is no one way to do it; what if AI is a classic “wicked problem”, one without a single solution?
Thomas Corbin and Deakin U colleagues argue that is exactly what it is, and that there are only better or worse, rather than correct, ways of dealing with it.
They analyse interviews with 20 assessment designers at an unnamed Australian university, which demonstrate the challenge AI creates for academics – starting with the fact that it meets all ten criteria of a wicked problem, including:
- No agreement on what needs fixing.
- No way of agreeing the problem is solved, given there is no agreement on what needs fixing.
- It’s all relative, innit! “Technical problems have correct answers that can be verified. Wicked problems, on the other hand, have only trade-offs, where every response sacrifices something valuable.”
- No agreed metrics to show a solution worked.
- Solutions cannot simply be tested until one is found that fixes everything, because each attempt has consequences.
- Multiple ways of trying to solve the problem – plus, there are as many different problems as there are contexts they occur in.
- “Wicked problems do not exist in isolation but instead emerge from and reveal deeper structural issues.”
- How a problem is defined frames which solutions are considered – and disguises others.
- And then there is the really wicked part: with wicked problems, “those who present solutions … have no right to be wrong.”
- “Decision-makers bear full responsibility for the consequences of their choices.”
So, what can be done? The authors recommend educators accept there is no universal answer and “make continuous professional judgments in conditions of permanent uncertainty.”
(This) “shifts the role of educators from implementing fixed solutions to engaging in a continual search for better, context-sensitive designs that respond to local needs, disciplinary values, and the evolving presence of AI in student learning,” they suggest.
And they propose three “permissions”:
- “Permission to compromise”: accept that no solution does everything, and put down the “toxic burden of pursuing perfection that cannot exist.”
- “Permission to diverge”: context is all; what will work for 20 philosophy students won’t with 250 in bized. “When we stop seeking perfect solutions, we can start having honest conversations about which trade-offs serve our students best, which failures taught us most, and how to be thoughtfully imperfect rather than accidentally inadequate.”
- “Permission to iterate”: “when AI capabilities transform monthly, when student behaviours shift each semester, and when professional requirements evolve constantly, the result can be that educators design assessments for yesterday’s technology, implemented with today’s students, preparing for tomorrow’s unknowns.”
The take-away: “universities that continue to chase the elusive ‘right answer’ to AI in assessment will exhaust their educators while failing their students. Those that embrace the wicked nature of this problem can build cultures that support thoughtful professional judgment rather than punish imperfect solutions.”