AI forces assessment of process


By STEPHEN MATCHETT

There is a problem with giving students rules on using AI – they get ignored.

Thomas Corbin and colleagues have a solution to this “enforcement illusion”: build the rules into “the underlying mechanics of the assessment tasks.”

They make the case for getting under the assessment hood with a sceptical look at three categories of existing AI regulation models:

  1. Traffic-lights: “explicitly labelling permitted and restricted AI engagement levels”
  2. AI assessment scale: “a five-level progression that enables educators to align AI permissions with students’ evolving skills and needs” – more nuanced than the “no,” “limited use,” “go” approach
  3. Declared use: universities such as Sydney and Melbourne allow students to use AI, provided they admit it – not disclosing is classed as potential misconduct.

The problem, the authors suggest, is that all three models rely on students understanding and doing what they are told.

The alternative they propose is building compliance into the design of assessment – for example, supervising the generation of parts of a take-home essay, asking random questions in an interactive assessment of a multiple-choice online quiz, or requiring tutor sign-off in live assessment of a lab report.

It is about “reorienting assessment from output to process.”

“Rather than evaluating only the final product, which could potentially be AI-generated, assessment may be designed to capture the student’s development and attainment of understanding and skill over time,” the authors state.

And they make the case: “when assessment validity hinges on student compliance with unenforceable rules rather than an inherent system design we build our educational systems on foundations of sand.”
