A moral quandary used as an age-old teaching device in psychology and politics can be turned on its head, posing a question back to universities.
The Prisoner’s Dilemma is a simple conundrum routinely posed to students of psychology. Two “prisoners” must each decide whether to remain silent or to betray the other in exchange for their freedom. The problem is generally presented as a matrix, as shown below. If both prisoners remain silent, they receive the lightest cumulative sentence: three years each. If one prisoner betrays the other, the betrayed prisoner gets ten years while the betrayer goes free. And if both prisoners betray one another, each is sentenced to five years.
| | Prisoner A (Says Something) | Prisoner A (Says Nothing) |
| --- | --- | --- |
| Prisoner B (Says Something) | Prisoner A and B get 5 years | Prisoner A gets 10 years, Prisoner B gets 0 years |
| Prisoner B (Says Nothing) | Prisoner A gets 0 years, Prisoner B gets 10 years | Prisoner A and B get 3 years |
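The logic of the matrix can be checked in a few lines of code. This is purely an illustrative sketch (not part of the original argument), with the sentences taken from the matrix above:

```python
# Payoff matrix for the Prisoner's Dilemma, keyed by (A's choice, B's choice).
# Each entry is (years for A, years for B); fewer years is better.
payoffs = {
    ("betray", "betray"): (5, 5),
    ("betray", "silent"): (0, 10),   # A betrays, B stays silent: A goes free
    ("silent", "betray"): (10, 0),
    ("silent", "silent"): (3, 3),
}

# Whatever B does, A's own sentence is shorter if A betrays.
for b_choice in ("betray", "silent"):
    years_if_betray = payoffs[("betray", b_choice)][0]
    years_if_silent = payoffs[("silent", b_choice)][0]
    assert years_if_betray < years_if_silent

print("Betraying is always individually better, yet mutual silence "
      "(3, 3) beats mutual betrayal (5, 5).")
```

The assertions show why the dilemma bites: betrayal is the individually rational choice in every case, even though both prisoners would be better off if neither chose it.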
Students of politics are also often taught a variation of the Prisoner’s Dilemma in the context of international cooperation. In an arms control arrangement, for example, if two states sign up but only one implements its measures, that state is at a significant disadvantage relative to the other. In this theoretical vacuum, it is therefore rare for states to enter a mutually limiting agreement that requires cooperation, without the necessary incentive or external pressure.
Now, I believe it is time we recontextualised the Prisoner’s Dilemma for the modern student. ChatGPT has taken the university sector by storm, with few universities effectively limiting its use, let alone implementing the measures needed to stop students turning to this seemingly all-knowing entity, which allows unscrupulous students to achieve near-perfect scores.
The measures designed to stop this cheating seem to satisfy universities, but in reality they provide only an additional cloak of protection to the AI-enhanced students, rendering them invisible to institutions keen to pretend everything is under control.
So, how can we expect a student to act in the face of this new technology, given that it is common knowledge that this software is not only universally available but readily used? A student may choose not to use AI, but they must then hope either that their classmates adhere to their university’s rarely enforced ‘academic integrity’ policies, or that their own capacity is sufficient.
In addition, students must wrestle with the knowledge that they are graded on a bell curve: their own score, unassisted by AI, will be adjusted relative to those of students who use AI to the fullest of its capacity.
Given that two students have a similar comprehension of course material, I propose the following dilemma:
| | Student A (uses AI) | Student A (doesn’t use AI) |
| --- | --- | --- |
| Student B (uses AI) | Equal academic playing field | Disadvantage to Student A |
| Student B (doesn’t use AI) | Disadvantage to Student B | Equal academic playing field |
In the current university environment, where consequences are meagre at best, where quizzes and exams are made harder in anticipation that some students will use AI, and where AI confers such an overwhelming advantage, is it any surprise that students use it so frequently, when there is such a clear disadvantage in not doing so?
Universities can try to equalise the playing field by banning AI outright, but student usage is already so widespread, and universities are failing so spectacularly in their current attempts to control it, that a ban seems unworkable. That leaves only two options: accepting that honest students will get lower grades, or taking new measures to enable all students to use AI equally.
Monty Winkler is a fourth-year student at the ANU, studying a Bachelor of Psychology and a Bachelor of International Relations.