
Opinion
‘Fake News’, ‘Deep Fakes’, ‘GenAI Hallucinations’ – if there is one thing (amongst many) that the third decade of the 21st century will be known for, it is as the time when we all realised we could no longer believe anything we saw.
The phenomenon goes by many names: disinformation, misinformation, confabulation,
bullsh*t, AI slop. Around the RMIT School of Computing Technologies, we’ve taken to calling it a ‘post-truth’ society: one in which every social media post needs to be questioned, every news item critiqued, and every poster analysed for whether it was generated by AI.
Since 2022, GenAI has created a ‘synthetic epistemology’ – the ability of machines to generate content that sounds human – that has vastly increased the sheer volume of content in circulation. This boom in generative AI means that text, images, videos and other content can now be easily created by machines with little or no human input, raising fears of an information flow that is both generated and consumed by machines. As a popular meme has it: GenAI turns a bullet-point summary into a long email the sender can pretend to have written, only for AI at the other end to turn that email back into a bullet-point summary the recipient can pretend to have read!
How do we deal with such abominable practices? A key first step is for computer scientists and scholars like us to up our game in the area of public education in digital literacy and fluency, so that the general public gains some basic understanding of how all this bedazzling technology actually works, rather than purely focusing on its outcomes.
We don’t have all the answers (nobody does!), but what we can do is increase understanding of the techniques involved, build knowledge of their features, and develop proficiency in applying them to analyse, interpret and communicate data and information efficiently, accurately and appropriately. Through this we can educate all concerned about some of the more obvious pitfalls and shortcomings. We are seeking to do this with other staff at RMIT by creating an interdisciplinary STEM minor in Digital Innovation and a new subject focused on Digital Fluency.
We need to be specific about what we're asking people to learn. Digital fluency isn't just about spotting a dodgy deepfake or knowing that ChatGPT sometimes makes things up with remarkable confidence. It's a broader disposition: a habit of interrogating sources, understanding incentive structures, and asking ‘who benefits from me believing this?’ Think of it like learning to drive.
You don't need to know how to rebuild an engine, but you do need to understand that wet roads affect stopping distance, that your mirrors have blind spots, and that the other drivers aren't always paying attention. The same logic applies here. You don't need a computer science degree to navigate a world saturated with synthetic content — but you do need a working mental model of how these systems produce what they produce, and why that matters.
That means teaching people – students, professionals, frankly, anyone who owns a phone – a few key ideas:
- That generative AI is a pattern-matching and prediction engine, not a truth engine;
- That the loudest or most polished content is not necessarily the most accurate; and
- That the friction of verification, while annoying, is democracy's immune system.
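The first of these points can be made concrete with a toy sketch (everything here is illustrative, not any real system): a miniature bigram model that learns which word tends to follow which in its training text, then generates by always predicting the most frequent successor. It reproduces whichever claim dominates its data, with no regard for whether that claim is true.

```python
from collections import defaultdict

# Toy "language model": count which word follows which, then generate
# text by greedily picking the most frequent successor. It has no notion
# of truth, only of which patterns dominate its training data.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "  # the false claim is simply more common
    "the sun is made of plasma ."
).split()

# "Training": count how often each word follows each other word.
follows = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, n=5):
    """Greedily append the most frequent next word, n times."""
    out = [start]
    for _ in range(n):
        out.append(max(follows[out[-1]].items(), key=lambda kv: kv[1])[0])
    return " ".join(out)

print(generate("moon"))  # → moon is made of cheese .
```

Note that `generate("sun")` also produces "sun is made of cheese ." here, because the pattern "of cheese" outweighs "of plasma" in the training text: fluent, confident, and wrong. Real models are vastly more sophisticated, but the underlying mechanism is still prediction from patterns, not verification of facts.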
So yes – this is a call to arms for our entire academy, academic and professional staff alike, whether computer science focused or not! But it's also an invitation.
If you're a technologist, find a way to share what you know, whether that's a guest lecture, a community workshop, or just being the person in your family who explains why that viral video probably isn't what it claims to be.
Help people understand the post-truth society and what it means. To extend the driving analogy: what we need is experts in the driver's seat. Computer scientists know how to think critically; we know how the machines work and what they do, and we are well placed to explain how to find the truth.
And if you're on the other side of that divide, curious but uncertain where to start — that's exactly why you should reach out to your computer science colleagues. Come as you are. Bring your scepticism. That’s precisely the point. The good news? This is teachable. And it's not nearly as dry as it sounds, once you connect it to things people already care about — their jobs, their health decisions, their political choices. We might be post-truth, but that doesn’t mean the truth can’t be found.
Professor Michael A Cowling, Dr Shekhar Kalra and Professor James Harland are from RMIT University.