With generative AI tools now widely accessible, educators say the focus has to shift from policing students to adapting assessments and teaching practices.
SINGAPORE: When Tim was tasked with a written assignment last semester, the third-year engineering student at Nanyang Technological University (NTU) simply turned to ChatGPT.
Using his senior’s essay as a reference, he asked the generative artificial intelligence tool to construct a new essay. He then rewrote it into something he was “capable of” and submitted it as his own.
“It’s very hard to get caught,” said the 24-year-old, who requested that his real name not be published.
Tim is part of a growing generation of university students who turn to AI to help with academic work. As universities grapple with managing this shift, educators are finding it increasingly difficult to identify and regulate AI misuse.
In April, three NTU students were accused of misusing AI after their submissions were flagged for false and inaccurate citations. The students disputed the claims and raised concerns about due process. NTU later held consultations with two of them and is convening a review panel that will include AI experts for one student’s appeal.
The incident sparked a wider debate on AI regulation in universities. Of 13 educators CNA interviewed, most acknowledged that detecting AI use across student submissions is virtually impossible with the current tools available.
Ms Eunice Tan, a lecturer at NTU’s Language and Communication Centre, said AI detection via plagiarism platforms like Turnitin often produces unreliable results and false positives. In one instance, a student who had used AI received a 0 per cent score from the detection tool – indicating it failed to detect any AI-generated content.
Instead of relying on detection tools, she watches for inconsistencies in students’ writing style and checks their work against cited sources.
“In the very worst cases, the students are just doing it for the sake of doing it, and they’ve not read the sources at all,” she said. “You can tell because they don’t even check the generated AI content, and it’s wrong information there.”
Of the 70 students she oversees each semester, she estimates she grades down two to three for AI misuse.
FEW CONFIRMED CASES
Most universities said instructors have autonomy over how AI is used in their courses, within broader institutional policies and guidelines.
An NTU spokesperson said no AI-related violations have warranted expulsions so far.
At Singapore’s other autonomous universities, few confirmed cases of AI-related academic misconduct have surfaced. Singapore Management University (SMU), which has more than 13,000 students, said it has had to address “less than a handful” of such cases in the past three years.
The Singapore University of Technology and Design (SUTD) has also seen only a few integrity violations, mainly involving plagiarism, while unauthorised AI use remains rare, said Associate Provost Ashraf Kassim.
The Singapore University of Social Sciences (SUSS) reported a “slight uptick” in cases, attributed partly to increased faculty vigilance and AI detection tools. Cases of academic dishonesty involving generative AI remain low, its spokesperson added.
The other two autonomous universities – National University of Singapore (NUS) and the Singapore Institute of Technology – did not respond to queries on how many cases of AI misuse they had recorded.
UNRELIABLE DETECTION TOOLS
SMU’s Associate Professor of Law Daniel Seah uses Turnitin as a “first-pass tool” but looks beyond the scores to evaluate the quality of his students’ attribution, citation and voice.
“If there is a marked discrepancy between a student’s submitted written work in the open assessment and their demonstrated abilities throughout the term, that is a reasonable basis to treat it as a red flag,” he said.
Certain signs, such as clichéd phrasing or unnaturally polished transitions, can point to AI use, but these markers are not always reliable. “That’s why contextual judgment is crucial,” he added.
To date, Assoc Prof Seah has not encountered any substantiated cases of AI misuse in his courses.
SMU computer science lecturer Lee Yeow Leong agreed that AI detection tools are “not definitive”. He does not allow his students to use AI in proctored assessments, and when students have take-home assignments, he quizzes them on their understanding during presentations.
“This approach ensures that students possess a deep understanding of their work, regardless of whether AI tools were used during the development process,” he said, adding that he has identified “fewer than a handful of cases” of AI misuse so far.
AI USE IS WIDESPREAD, STUDENTS SAY
Many students CNA spoke with admitted to using AI tools for assignments, often without declaring it. Given the lack of reliable detection tools, they said, it is not difficult to get away with.
Of 10 students interviewed, only two said they were confident their use complied with university guidelines. Most did not want their real names published to avoid getting into trouble in school.
Some described using AI lightly for brainstorming, but chose not to declare it due to the effort required – such as providing screenshots of their ChatGPT sessions.
Manuel, who has just finished his first year in business management at SMU, “started playing around” with ChatGPT when he entered university and realised he could use AI for generating ideas, proofreading and grammar checks.
Like most other students CNA spoke to, the 23-year-old felt it was okay to use generative AI tools for modules he deemed less meaningful or valuable to his education.
He recently used AI to generate 80 per cent of a graded assignment for a module he described as “full of fluff”.
Manuel said he usually avoids copying AI responses word for word, as ChatGPT’s writing style is often obvious. Still, when asked to declare his use of AI, he and his project mates usually understate it, saying they used it only for grammar checks.
“You’re digging yourself a hole by telling them what you did,” he said.
Carrie, a third-year humanities student at NTU, said she tries not to rely too heavily on tools like ChatGPT, using it only to summarise texts or as a reference.
“I wouldn’t use it to help me write the entire essay. That’s a bit too much,” she said, adding that the AI output could also be inaccurate.
Still, there’s no way to stop group mates from using AI tools without her knowledge. “I can control myself from using AI, but if other people use it, I also don’t know,” Carrie said.
Pauline, a recent SMU graduate, said she now relies so heavily on AI that her writing has declined. “I don’t think I can come up with such good essays as I did in Year 1 anymore, because I just rely on ChatGPT for everything.”
NTU computer science graduate Jamie Lee used AI where permitted, particularly to optimise solutions for problem sets. An assignment that used to take her a day could be completed in an hour with the help of AI, she said.
She estimated that she used AI in about 90 per cent of her assignments – but mainly to supplement her understanding, rather than as a shortcut.
“Ultimately, I also want to understand how to do the question, so I don’t want to just copy each answer for the sake of doing an assignment.”
ADAPTING TO A NEW REALITY
Educators agreed that trying to catch every instance of AI use would be futile. Associate Professor Aaron Danner, from NUS’ Faculty of Electrical and Computer Engineering, is against blanket bans on AI.
“It’s going to be a lost cause to try to tell whether a student has used AI or not for writing assignments,” he said. “We have to adapt our assignments to this reality.”
Dr Grandee Lee, who lectures at the School of Science and Technology at SUSS, supports a “fit-for-purpose” policy.
If a course teaches skills that AI can replicate, such as computational thinking, summarising and writing, then AI use should not be allowed, he said. But in more advanced courses, AI collaboration can be useful in both learning and assessment.
Some instructors have fully embraced the use of AI in classrooms. Associate Professor Donn Koh, who teaches industrial design at NUS, requires students to use AI in certain assignments.
“Whether AI is plagiarism is no longer the main issue,” he said. “The real challenge is helping students stand out and create differentiated value when everyone has the same AI tools.”
Educators said that building trust between teachers and students will be essential as AI becomes more embedded in academic life.
Dr Lee Li Neng, a senior lecturer at NUS’ Department of Psychology, said AI use should not be turned into a “cat-and-mouse game” between students and teachers. Instead, he advocated transparency so teachers can better understand how students are using AI and adjust their teaching accordingly.
“We have to be honest that many of us are still trying to figure this out as we go along,” he said.
Source: CNA