
AI detection tools are unfair, and relying on them lets institutions avoid what they must do: “rethink” assessment design and academic integrity as artificial intelligence technologies continue to develop.
Mark Andrew Bassett (Charles Sturt U) and colleagues argue in a new paper that AI detection tools do not reliably work, and that they impose a “false dichotomy” by classifying work as either human or tech creation, which “ignores work created with, not by, AI.”
They set out multiple failings that make the tools inevitably unreliable, including:
- linguistic models that produce probabilistic judgements of authorship which cannot be independently verified
- linguistic markers of human and AI text that do not exclude each other: “there is no principled reason to believe that a human cannot produce writing that contains linguistic features commonly found in AI-generated text. Indeed, AI writing is a creation of the human prose it was trained on.”
- confirmation bias: “markers search for evidence that confirms use of AI, while overlooking counterexamples, such as personalised elements or assignment-specific context”
- using AI to detect AI: large language models “lack the capacity to analyse authorship beyond pattern recognition”
They also address flaws in the processes universities use to allege and assess academic misconduct, especially given the way AI is changing assessment. “If students are permitted to use AI outside of assessment but not within it, enforcement depends on identifying a precise threshold that, in practice, remains undefined,” they write.
And they propose less suppression and more transformative adoption. “Institutions must accept that AI detection is an unworkable solution to a problem that cannot be solved through surveillance and punishment. The focus must move from detection and enforcement to assessment design that recognises AI’s role in learning.”
The paper is a practical example of Thomas Corbin and colleagues’ proposal to accept AI in assessment as a “wicked problem”: there are better and worse ways to work with it, but no single solution. As Corbin’s team put it, “universities that continue to chase the elusive ‘right answer’ to AI in assessment will exhaust their educators while failing their students.”