AI as a (not entirely reliable) Research Assistant

Kien Nguyen‑Trung (Monash U) wanted to know whether AI can read documents, summarise interviews, and identify key themes for a qualitative research project. So he found out for himself, which rather answered the question in advance, by letting an AI loose on his research files.

Dr Nguyen-Trung reports that ChatGPT was way, way faster at summarising transcripts and identifying themes, but using it means the researcher does not have a “meaningful engagement with the data.”

And while it generated coding for key concepts, it took multiple instructions to capture the main themes of the content, and it certainly did not displace the researcher.

The exercise revealed potential: GenAI can turn unstructured text into diagrams and flowcharts and grasp basic concepts, “moving towards more abstract levels of analysis.”

And numerous limitations: GenAI could not thoroughly code a 20,000-word dataset, it took multiple prompts to produce results in a specified format, and the work required multiple platforms.

And for all the information it can access, “GenAI in general and ChatGPT in particular, is not trained in the proper knowledge and rules required for qualitative research.”

The takeout: “getting what we want through ChatGPT is no straightforward task.”

“Researchers take GenAI as their assistant and maintain the lead throughout the analytical process. GenAI can support here and there, but in the end, researchers must be able to claim that they produce the knowledge through their intellect, positionality, and reflexivity.”

And yes, FC ran the text through Gemini, which produced a bland and brief summary, so brief it appears the AI only read the article’s abstract.
