Research integrity demands an AI taxonomy


Universities are racing to write AI policies, but most leave students and staff guessing. GAIDeT offers a simple, task-based way to disclose AI use in theses and research: name the tool and version, the tasks delegated, and who remains responsible. It takes a few lines, removes stigma, and turns policy into practice — fast.

“The world will never be the same again”

The AI era did not arrive overnight. The technology had been maturing for years, but it entered our computers only when the first generative artificial intelligence (GAI) models became openly available to everyone.

I still remember that moment — an ordinary university meeting at the end of 2022. I asked for the floor and said:

“The world will never be the same again. Text is no longer our routine. From now on, artificial intelligence does it.”

The responses came quickly:

“I wouldn’t be so happy. People will lose their jobs. And what about philologists?”
“Poor lecturers. How will we now check if students wrote their papers themselves?”

I sighed and replied:

“Once there was a wonderful job — a chimney sweep. Another profession — a star counter. We adapted. By the way, the large language models were created by philologists themselves. They simply now have different, no less important work.”

One thing was clear: AI is already with us, and this “wild rabbit” will not go back into the hat.

On Every Computer — and in Every Thesis

Today, ChatGPT is just another browser tab: always there on laptops, phones, and in student dorms.

I know several respected academics who proudly insist they “don’t use AI.” They write “by hand,” yet at the same time, they translate in the browser, polish style in Grammarly, or rephrase with DeepL. That’s AI too — just a quieter version, without the fanfare.

And let’s be honest: in the past year, I haven’t met a single student who hasn’t used ChatGPT. Not everyone admits it right away. But as soon as the conversation turns to shortening a text, finding sources, or rephrasing, GPT appears as a familiar assistant.

And there’s nothing wrong with that.
Students live in a new world and sensibly offload routine tasks to a tool that truly saves time. Isn’t that what we always wanted — more space for thinking, analysis, and creativity?

The problem lies elsewhere: rulemaking and regulatory systems are lagging.
Lecturers complain: “These theses smell of GPT.”
Reviewers sigh: “The texts are smooth, flawless — but all too similar.”
And policymakers speak in terms that are far too general.

Detectors, Watermarks, and Stigma: How the Conversation Hit a Dead End

When it became clear that AI had permeated everything from student theses to grant applications, there was a flood of publications and opinion pieces — suddenly, everyone became an expert and rushed to propose solutions.

Some suggested watermarking all AI-generated text. Others called for banning generative models from science altogether. Still others demanded that authors publish their prompts. In response came the retort: “Don’t stigmatize!” Marking text this way, critics argued, would only add distrust and shame. And ultimately, pretending that no modern text has had any AI assistance is, at the very least, insincere and hypocritical.

Into this chorus entered AI-text detectors — tools that promised to “catch” machine-written sentences. They were quickly built into plagiarism and integrity systems, only for it to become clear that they:

    1. frequently produce false positives,
    2. perform worse for non-English authors,
    3. are easily “fooled” by paraphrasing or translation,
    4. and most importantly — foster a climate of suspicion, where even honest work comes under doubt.

At the same time, universities began adopting policies on AI use — and that is a positive step. Yet most of these documents remain overly vague, especially regarding disclosure of AI contributions. The typical formula is: “AI use must be declared” — full stop. Without answers to basic questions: where in the work should this be done (title page, methods, appendix?), what level of detail is expected (tasks or prompts?), who is responsible for the content, whether this applies to theses and coursework, and how such disclosure will be evaluated. In this climate of uncertainty, making a transparent declaration is often more complicated than staying silent.

In the end, everyone understands that some form of disclosure is needed, but no one knows exactly what or how to disclose. Authors are afraid of saying too much. A sense emerges that using AI makes the work somehow “less real,” and that being transparent may even be harmful.

Thus we enter a vicious circle of stigma. And while we spin in it, the chaos only deepens. The simple question remains unanswered: How can one clearly and comfortably say that AI was used — without looking like a rule-breaker?

Time to Act: GAIDeT — A Simple Answer in a Complex Environment

The answer already exists — GAIDeT, the Generative AI Delegation Taxonomy.

It is not a detector, nor another vague formula like “we used some AI somewhere.”
GAIDeT is a shared language that lets us explain, briefly and without stigma, what exactly was delegated to AI, at which stage, with which tool, and who remained responsible.

For example:

“The authors declare the use of generative artificial intelligence during the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GAI tools under full human oversight: literature search and systematization; code generation; data analysis; translation; ethical risk assessment. GAI tool used: ChatGPT-5. Responsibility for the final version of the manuscript lies fully with the authors. GAI tools are not listed as authors and bear no responsibility for the final results. Declaration submitted by: [x].”

This level of specificity sends a powerful signal: AI was not a hidden co-author, but a transparent, well-documented assistant. When such statements become standard practice, the stigma of AI use begins to fade — disclosure becomes normal, not suspicious.

GAIDeT is already being adopted by journals and universities and adapted for reporting templates. It can easily be applied to theses, coursework, grant applications, and research projects. It is not a cumbersome standard but a clear, structured declaration that preserves human accountability.

The taxonomy is openly available and comes with the online GAIDeT Declaration Generator, which creates a ready-to-use disclosure statement based on your selections. Researchers and students can insert it directly after acknowledgements in a paper or include it in a thesis.
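
For readers curious about the mechanics, here is a minimal sketch in Python of how a declaration like the sample above could be assembled from a user's selections. It is illustrative only: the function name, parameters, and wording template are assumptions modelled on the sample declaration, not the actual Declaration Generator's code.

    # Illustrative sketch, not the real GAIDeT Declaration Generator:
    # it shows how a disclosure statement could be assembled from selections.
    def build_gaidet_declaration(tasks, tool, submitted_by):
        """Assemble a GAIDeT-style disclosure statement from user selections."""
        task_list = "; ".join(tasks)  # e.g. "literature search; translation"
        return (
            "The authors declare the use of generative artificial intelligence "
            "during the research and writing process. According to the GAIDeT "
            "taxonomy (2025), the following tasks were delegated to GAI tools "
            f"under full human oversight: {task_list}. GAI tool used: {tool}. "
            "Responsibility for the final version of the manuscript lies fully "
            "with the authors. GAI tools are not listed as authors and bear no "
            "responsibility for the final results. "
            f"Declaration submitted by: {submitted_by}."
        )

    # Example: two delegated tasks, one tool, one accountable human.
    print(build_gaidet_declaration(
        tasks=["literature search and systematization", "translation"],
        tool="ChatGPT-5",
        submitted_by="Jane Doe",
    ))

In everyday use, of course, no code is needed: you tick the delegated tasks in the online tool and paste the generated paragraph into your manuscript.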

Universities Benefit the Most

GAIDeT is not about control or sanctions. It is about trust, clarity, and a modern academic culture where AI is an instrument — not a trigger for a witch hunt.

Why do universities stand to gain the most?

    1. Lecturers get clear guidance. No need to guess “was GPT used here?” If AI use is transparently declared, they can focus on what matters: ideas, structure, argumentation.
    2. Students gain a sense of safety. If they use a tool responsibly and openly, it falls within the rules. Less anxiety, more learning.
    3. University leaders get a ready-made policy block. GAIDeT can easily be integrated into:
      - thesis and dissertation templates,
      - academic integrity regulations,
      - author and reviewer guidelines,
      - internal faculty procedures.

This can be implemented within a week at no additional cost.

In Conclusion

AI is already here: in our texts, classrooms, browser tabs, and Google Docs. There is no going back — and perhaps there shouldn’t be. The real challenge is not the technology itself, but the confusion and mistrust surrounding its use.

It is not AI that undermines academic trust — it is the lack of clarity.

GAIDeT offers that clarity. It turns disclosure into a simple, human act: straightforward, honest, and practical.

Professor Yana Sychikova is Vice-Rector for Research at Berdyansk State Pedagogical University.

