Unprompted: The AI Hallucination
LanguageCert, 17 June 2025
In this article, Cathy Jones, Assessment Development Specialist, looks at how generative AI isn't as clever as some users assume, and why it is so important for students to develop critical thinking and writing skills.
The whole enormous ecosystem of public examinations and high-stakes tests is built on trust. From vice-chancellors to teenagers, everyone must have confidence that tests are fair, reliable and secure. At university, as part of this ecosystem, trust is essential. There is an expectation that administrators, faculty and students share a commitment to honesty and learning. Since the arrival of large language model (LLM) generative AI, there has been growing concern that students' use of AI for coursework and take-home assessments is antithetical to trust and anathema to the purpose of university education.
The problem is growing rapidly, and this year could be the tipping point after which solutions must be found to the problems generated by AI. Detecting misuse can be difficult and time-consuming. Conclusively proving misconduct is even more difficult, time-consuming and problematic. At the UKENIC24 conference, I spoke about how wider, more flexible use of vivas and spoken assessments, combined with a greater role for oracy, could be part of the solution to the misuse of AI. If vivas are to be used more, students must have the requisite speaking, listening and verbal reasoning skills. Vivas are also resource heavy: every student needs an examiner, while rooting out and adjudicating misconduct requires a panel of experts and meeting a burden of proof.
What does Artificial Intelligence know?
My advocacy for oracy is not at the expense of writing skills. Students must work out their chosen subject areas for themselves and be able to both write and verbalise their knowledge and understanding. AI can’t do this for them – it knows nothing, can teach nothing and gives students nothing of worth.
What does AI offer? Its developers claim it provides the answer to any question. This claim is very far from the truth. What AI gives is a response: an answer, not the answer. It cannot be otherwise, because AI has no knowledge and cannot tell right from wrong or truth from fiction.
AI responses are essentially exercises in form: grammatically correct sentences that read naturally, although there are tells and giveaways, such as hyperbolic adjectives and superlative modifiers. AI builds sentences but does not construct meaning. That is why the sentences in an AI response do not make an argument within a paragraph, or there are no paragraphs at all, just bullet points summarising concepts and themes.
This lack of argument, meaning or content is an inherent, if not tragic, flaw of AI. As are AI 'hallucinations', a deliberately misleading euphemism for false information, quotes and references. The response to a prompt might be right, wrong or utter nonsense. Hallucinations aren't glitches or bugs; they are the inevitable result of how AI works. Although hallucinations aren't bugs, they are contagious and can spread from one response to another. AI works by drawing its responses from existing online information, and more and more web-based content is being produced by AI. One algorithmic hallucination leads to another. AI is creating and feeding off its own misinformation like a snake eating its tail.
It isn’t meant to be easy
Writing is hard. It is hard in your first language and harder still in a second or third. In the monolingual world of anglosphere education, it can be forgotten that English may well be a student's fourth language, after the languages spoken at home and at regional and state level. Writing about a subject you know well is difficult, and writing about a subject you are still learning is more difficult still. Writing under deadline pressure and the weight of expectation adds further incentive to take the easy way out offered by generative AI.
But writing isn't meant to be easy. There are no quick fixes. Writing is cognitively and linguistically demanding. Before you can express your knowledge or thoughts, you must do the learning and the thinking. An analysis of writing and the interplay of cognitive and linguistic processes quickly becomes a Gordian Knot of interconnected Moebius Strips. Simply put, if you aren't writing, you aren't thinking and you aren't learning. And isn't that what university is for?
At university, developing an academic writing style takes time. No student, international or otherwise, is expected to produce in their first term the same level of academic writing expected in their final assessment. Furthermore, each field of study has its own academic writing conventions. And what exactly is 'academic writing'? Just as Tolstoy wrote in Anna Karenina that 'there are as many kinds of love as there are hearts', there are as many definitions of academic writing as there are essays.
Critical thinking and expression
LANGUAGECERT Academic recognises the understandable lack of consensus and the differences between disciplines by assessing the linguistic foundation and prerequisite communicative abilities that are essential to the development of academic writing.
I often talk about the intersection of skills, and critical thinking sits at the very centre of the nexus of communicative ability, academic writing and independent thinking. In essence, critical thinking is about 'the argument': identifying, analysing and evaluating the explicit or implicit message, and its supporting evidence, communicated in speech, writing or any other form of media. The flip side of the critical thinking coin is the ability to come to one's own conclusions, express a structured and well-reasoned perspective, and present supporting evidence.
The development of students’ critical thinking is embedded in the design of LANGUAGECERT Academic:
- In the Reading test, Part 3, the test taker reads and answers questions about four separate texts on the same topic by four different authors. The texts demand careful analytical and synoptical reading and require the test taker to display higher-level reading skills, combine rhetorical and contextual information, infer meaning, draw out implications, and compare the authorial positions and arguments.
- In the Writing test, Part 2, the test taker reads a short statement containing two contrasting viewpoints. The test taker is asked to write 250 words to analyse, evaluate and discuss the two arguments in a structured, well-reasoned way. Then, the test taker must synthesise the arguments and present their position clearly and convincingly.
- In the Speaking test, Part 4, the test taker is shown a graph or infographic, given time to prepare, and then asked to speak for two minutes about the conclusions they have drawn from their analysis and evaluation. The test taker is then asked follow-up questions to extend and defend their arguments, reasoning and conclusions.
To reiterate: AI knows nothing, cannot tell truth from fiction, and gives an answer, not the answer. By fostering critical thinking skills, LANGUAGECERT Academic helps students build their own knowledge, make their own judgements and give their own answers.
Winter is coming
AI can get it 'right', or good enough to deceive the marker. However, students who misuse AI are having their own hallucinations and misleading themselves. They are not making the most of a university education and are missing the point of what a degree means to employers. Those who do not see the value of studying overvalue a piece of paper. Employers do not want a certificate; they want the knowledge, understanding and hard work that a degree certificate should represent.
Everyone expects students to let their hair down (a little) at university, but not to let themselves down. We trust students to be like ants, putting away stores of knowledge and skills in the summer of their youth, while those misusing AI are like the grasshopper, believing they will be fine in the cold, competitive climate of the world after university.