Inconsistent AI standards create research mayhem

Journals are advising authors how to use generative AI in their papers, but each typically advocates its own approach. The result is a Tower of Babel, with no common standards understood by all.

Giovanni Cacciamani (University of Southern California) and colleagues reviewed the guidance that academic publishers and scientific journals give authors last year – their findings are published in the British Medical Journal.

They report variations in what publishers and journals advise authors they can and cannot do with AI, covering research, writing, image generation, disclosure and blanket bans on crediting generative artificial intelligence as an author. And while almost all journals have guidelines, fewer than a quarter of publishers do.

The guidelines are all over the shop. “Substantial heterogeneity was found in guidance on the application of GAI use in academic research and scholarly writing,” is the polite way they put it. Which is not great for researchers. “A lack of clear and standardised recommendations along with frequent updates to guidelines places responsibility on authors to seek out ‘correct’ guidance,” the authors note.

In time, research area-specific guidelines will emerge, but for now a “set of broadly encompassing, cross discipline, inclusive guidelines” is needed. The authors suggest hanging on for the Generative Artificial Intelligence and Natural Large Language Models for Accountable Reporting and Use (Cangaru) guidelines, which a team including Professor Cacciamani is developing.

The hope is that Cangaru will serve as a training tool for researchers, provide a framework for assessing manuscripts, and help scientists and policymakers evaluate papers.
