How AI could be slowing your recruitment

While most institutions are focused on satisfying TEQSA’s June request for assurance on assessment security, few appear to be looking at how generative AI might be slowing enrolments.

Recent research by IDP found that 45% of the world’s students used ChatGPT or another form of AI to help decide which institution to study at – rising to an extraordinary 75% of students in China. The same study also found that similar proportions of students use ChatGPT or other AI to decide which subject to study (47% globally, 74% in China).

So what does that mean for your institution?

Future Campus took advantage of the break to look first-hand at what ChatGPT suggests in relation to study options. We used 20 different prompts simulating questions students might ask (e.g. “Is it better to study in Sydney or Melbourne?”, “Which Victorian institution is best to study a Bachelor of Accounting?” and “Would it be better to study science at the University of Newcastle or UTS?”). We also translated the questions into Mandarin to see whether the responses varied in a different language (from our limited viewpoint, responses appeared similar across languages, but further investigation by linguistically talented researchers is clearly required).
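Institutions wanting to run a similar audit at scale could script it against the OpenAI API rather than typing prompts by hand. The minimal sketch below assumes OpenAI’s Python client; the model name and the sample prompts (including the Mandarin translation) are illustrative placeholders, not the exact set Future Campus used.

```python
# A minimal prompt-audit sketch using OpenAI's Python client
# (pip install openai). Model and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Sample prompts in English and Mandarin; a fuller audit would cover
# ~20 questions per institution, discipline and location of interest.
prompts = [
    "Is it better to study in Sydney or Melbourne?",
    "Which Victorian institution is best to study a Bachelor of Accounting?",
    "在悉尼还是墨尔本学习更好？",  # Mandarin version of the first prompt
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whichever you are auditing
        messages=[{"role": "user", "content": prompt}],
    )
    # Print each answer so responses can be reviewed for bias and accuracy
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n{'-' * 60}")
```

Logging the outputs over time would also let an institution track whether changes to its online content shift how it is described.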

There are findings relevant to each institution, but for the sake of brevity, the key issues that emerged were:

  1. Location, location: responses often focused on the comparative merits of geographic locations, and location was typically the first attribute mentioned in university comparisons. The results reinforce the existing challenge of recruiting students to regional areas by making them sound relatively unattractive. For example, in response to a question about whether to study accounting in Mount Gambier or Adelaide, ChatGPT responded: “Regional Setting: Mount Gambier offers a quieter, more regional setting compared to Adelaide. If you prefer a smaller, close-knit community and a quieter lifestyle, Mount Gambier may be a better fit.” For Adelaide, it said: “Urban Environment: Adelaide is a larger city with a vibrant urban environment. It offers a wide range of cultural attractions, dining options, entertainment venues, and recreational activities. If you enjoy city living and want access to a variety of amenities and opportunities, Adelaide may be more suitable.” No prizes for guessing which one students will pick.
  2. AI loves institutional rankings too: when we asked for the best institution to study a given discipline in a particular State, the responses typically leaned on institutional rankings, and this produced some poor advice. For example, when asked which institution was best to study nursing, the top suggestion was the University of Melbourne’s Bachelor of Nursing – a program that doesn’t exist. Presumably it topped the list because of UoM’s high institutional ranking, but the example demonstrates the shallowness of discipline-specific advice from AI at this point.
  3. Proof points were few: responses were typically generic and, even when accurate, mostly lacked supporting evidence. This may reflect the emphasis on hyperbole over evidence in the marketing material the content is drawn from, but it is certainly a limitation of the responses.
  4. Key drivers of choice: responses typically revolved around factors such as location, industry links, research and cost of living. They made strong assertions that appear to mimic university marketing boasts and didn’t wade into factors such as student satisfaction, graduate employment rates or entry standards.
  5. Content can mislead: when asked to compare IT courses at the (fictitious) University of Ardmona and Swinburne University of Technology, ChatGPT didn’t recognise that UoA doesn’t exist. Instead, it suggested considering reputation, quality and industry connections. It made stronger assertions about the reputation and industry integration offered by Swinburne, but offered no facts – making both offerings unconvincing.
  6. Definitive recommendations weren’t provided: ChatGPT typically qualified each response with advice to do your own research, but then listed features of the institutions, courses or locations being compared, based on what it has scraped from the web.

Conclusions: regional institutions hoping to attract international students in key discipline areas will need to examine how they can address the bias currently evident in ChatGPT responses. At the same time, reviewing how content is structured online and finding new ways to help students use generative AI apps and platforms more effectively will surely be a key priority for institutions across the country once their TEQSA submissions are complete.

Future Campus will be holding a webinar on AI and Brand on 23 April. Tickets are available here.
