TEQSA tackles AI

The Tertiary Education Quality and Standards Agency acknowledges that it will have to adapt to the Large Language Models (LLMs) that underpin AI products such as ChatGPT, rather than seek to bend them to its rules.

“The pace of change will continue to accelerate, and educational legislation, frameworks and institutions will need to become more agile to remain relevant and competitive,” TEQSA states in a submission to the House of Representatives committee inquiry into AI in education.

It seems a switch in stance from legislation requiring ISPs to block contract cheating services, which Stephen Colbran, Colin Beer and Michael Cowling argue could cover ChatGPT. Perhaps, as they suggest, it is because LLMs can have “legitimate public benefits,” which TEQSA gets, pointing to AI’s potential in teaching, learning, course design and assessment.

But TEQSA also addresses AI problems on its patch:

  • Research integrity: while the regulator takes more words to state it, TEQSA is alarmed that AI makes things up, pointing in particular to images it generates, which are “increasingly difficult to detect and can compromise the integrity of research findings.”
  • Peer review: it warns of AI lacking subject expertise replacing peer reviewers, with the result that dodgy research is not picked up prior to publication.
  • Academic integrity: TEQSA goes beyond concerns about cheating to warn that AI can subvert the entire system: “there is a risk of AI systems becoming self-contained and self-referential.” And it warns, “if the education system were to shift entirely to a ‘student/AI hybrid’ model, it raises concerns about how future students will acquire the necessary content knowledge to effectively evaluate AI-generated output.”

Having scoped out the challenge, TEQSA assures the committee it is on the case. “As the technology will only become more powerful, there is a need to focus not solely on the specific capabilities and limitations of the current generative AI tools, but rather on principles and regulations that we want to apply to a context of rapidly enhancing technological capability.”

TEQSA won’t get an argument out of OpenAI on the principle of regulation for “frontier AI,” “highly capable foundation models” with “dangerous capabilities (that) can arise unexpectedly.” The company proposes industry standard-setting and “granting enforcement power to supervisory authorities and licensure regimes.”

But who should do the supervising and licensing?

Academics and entrepreneurs combining as Australians for AI Safety propose a national commission to impose safety standards, just as civil aviation authorities “give Australians the confidence to fly.”

“We wouldn’t allow aircraft manufacturers to sell planes in Australia without knowing the product is safe, and we wouldn’t excuse a business for being ignorant about the potential harms of its products, so the law should similarly ensure adequate legal responsibility for the harms of AI,” they write to Industry and Science Minister Ed Husic.
