As an assessment and inclusion researcher, my interest was piqued by Jason Lodge’s piece likening Generative AI to e-bikes. Not because it was a poor comparison – quite the opposite! – though perhaps an e-scooter might be a more apt analogy, since e-scooters are viewed as a nuisance and face bans in some jurisdictions, just like GenAI.
Let me start by owning that while I was once a frequent bicycle user – a habit that started when I was a PhD student on a scholarship looking to save on parking fees – I am not currently much of a rider. As someone who previously enjoyed two-wheeled freedom, I have found an e-bike extremely tempting. There is evidence to suggest that e-bikes can support those with less endurance to get out, do some exercise, and use more active forms of transportation. Could it be the same for Generative AI?
Let’s examine the analogy more closely. The two are similar in that each reduces the effort required to engage in a particular task – one cognitive, one physical. But what if they allow people to start a task or participate in ways that might previously have seemed daunting? Or support them to engage in that task more frequently? There are different forms of e-bike assistance: pedelec, where you have to pedal for the motor to engage, and throttle, where you can propel the bike independently of pedalling. These might align with using GenAI for editing, revising, or idea generation, as opposed to simply copying the entire output from GenAI and passing it off as one’s own. Certainly students are not all engaging in GenAI use in the same way (Future Campus, 12 Nov 2024), and it’s not all about just being lazy (Future Campus, 26 Nov 2024).
What does this mean for higher education? GenAI isn’t an untrammelled good, nor are e-bikes appropriate in all situations (imagine the Tour de France, or Paris-Roubaix, on e-bikes … where’s the fun in that??). Rather, e-assisted tools might help some people gain access to places and spaces, and reduce barriers to learning. Circumstances might have led vulnerable and minoritised groups to use such tools in desperation. We should therefore be investigating the experiences of those for whom equitable participation might depend on such tools, rather than disparaging anybody who can’t join the pack of MAMILs (middle-aged men in lycra) who enjoy a weekly 70km ride.
Further, riding an e-bike requires some skill in steering and balancing, and knowledge of the road rules, so as not to endanger oneself and others. It’s unclear at the moment how much GenAI supports learning, or whether it results in cognitive atrophy and metacognitive laziness (Fan et al. 2024). For some, the e-bike is a necessary part of survival as a delivery rider in Australia, but it can also lead to injury and death. So, what are the infrequent but dark downsides of using GenAI for learning? What are the limits of harm that we might accept?
Another important aspect of the analogy is resourcing. E-bikes cost a fair bit more than the standard bicycle, just as the paid versions of GenAI services are touted as better than the freely available ones. Those with superior funding are more likely to be able to use the shiny version of these tools, whilst others will be left behind with what they can scrounge – the hard rubbish bike with a rusty shifter and a squeaky brake. What differential impact does this have on the student journey, and on graduate destinations?
Authorities are also still scratching their heads about mode separation between pedestrians, bikes, e-bikes, e-scooters, and cars, with a tendency to favour the status quo, and a reluctance to inconvenience car users. There isn’t a single metaphor that describes the relationship between humans and AI in learning (Lodge et al. 2023), so who are the car users in this extended analogy? Many safety campaigns also focus on the visibility of cyclists, placing the responsibility back on individuals, rather than installing infrastructure (like separated lanes – as per Danny Liu’s ‘two lanes’ approach) to offer clear opportunities for appropriate and safe use.
Lastly, e-bikes – and GenAI – can still feel like a bit of a niche conversation amongst early adopters and enthusiasts. But both are incredibly important to consider when thinking about our collective future in a sustainable world. So, what needs to be done to translate and broaden awareness and understanding? And what kind of evidence might we need to support these conversations? “Show and tell” alone isn’t enough.
We need to go beyond mere claims about the potential benefits and risks of GenAI in education (Jensen et al. 2024), and actually stump up rigorous, high-quality research demonstrating what happens in practice and its differential impacts on learning, to inform what should happen next.
Associate Professor Joanna Tai works at the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University