
A survey of AI detection tools, and of research into their use, has found that in many cases the tools are so flawed that students can use AI to avoid learning and academic staff cannot take effective action.
Charles Sturt University Associate Professor Mark A. Bassett has produced a paper arguing that AI text detectors in education are fundamentally unfit for purpose, with misleading metrics and unverifiable outputs.
“Their use is often procedurally unfair,” A/Prof Bassett states.
“The paper highlights the risk of confirmation bias, arbitrary suspicion, and inequitable impacts, especially on students without access to high-end AI tools. I believe it is time we abandoned reliance on flawed detection tools and instead reformed assessments to make unauthorised AI use less viable.”
The paper draws together insights from a range of research analysing AI detection tools, finding that deficiencies in current approaches can be inequitable to students. A/Prof Bassett points out that comparisons of students’ work are often based on flawed or dated models of what student work should look like without AI.
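To illustrate why headline detector metrics can mislead (the figures below are hypothetical, chosen for illustration rather than drawn from the paper), consider the base-rate problem: even a detector with seemingly strong accuracy produces a large share of false accusations when genuine AI misuse is uncommon. A minimal sketch:

```python
# Hypothetical illustration of the base-rate problem with AI detectors.
# None of these numbers come from Bassett's paper; they are assumptions
# chosen to show how headline accuracy figures can mislead.

def flagged_work_truly_ai(sensitivity: float, specificity: float,
                          base_rate: float) -> float:
    """P(work actually involved AI | detector flagged it), via Bayes' rule."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Assume a detector that catches 90% of AI-written text with a 5% false
# positive rate, applied to a cohort where 10% of submissions misuse AI.
ppv = flagged_work_truly_ai(sensitivity=0.90, specificity=0.95, base_rate=0.10)
print(f"Share of flagged submissions that truly involved AI: {ppv:.0%}")
# Prints ~67%: roughly one in three flagged students would be falsely accused.
```

On these assumed numbers, a detector marketed around “90% accuracy” would still wrongly implicate about one flagged student in three, which is one concrete way such metrics can misrepresent the risk to individual students.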
Feeding a student’s work into a GenAI tool may also breach the student’s IP rights, and it requires staff to trust the tool’s own denials or confirmations of authorship – effectively putting AI in charge of policing itself, with humans as intermediaries.
“Getting hit by a car when crossing the road after reading a horoscope that warned you to ‘tread carefully today’ doesn’t legitimise astrology. Likewise, when a student admits to using GenAI, it doesn’t corroborate the results of an AI detector.”
“Simply telling a student that an AI detector has flagged their work as partially or entirely AI-written is problematic. Students don’t know the litany of issues outlined above and likely think that a positive AI detector result constitutes evidence against them (it does not).
“Doesn’t this all mean that, in many cases, students can use AI to avoid learning, and we can’t take action? Yes.
“This shouldn’t come as a surprise. It’s the underlying reason why Australia’s higher education regulator, the Tertiary Education Quality and Standards Agency (TEQSA), issued a Request for Information (RFI) to all Australian higher education providers asking how they are responding to the risks (and opportunities) posed by GenAI. If AI detectors worked, it’s arguable that TEQSA would never have issued an RFI.”