As first reported by The Straits Times, Nanyang Technological University (NTU) confirmed that three students were penalised for academic misconduct after their assignments were found to contain content allegedly produced with generative AI tools. The university cited fabricated references, broken links, and non-existent data in the students’ essays for a health and politics module, which explicitly banned the use of tools like ChatGPT. The students received zero marks for the assignment, which accounted for 45% of their final grade.

Two of the students admitted to using AI tools — one knowingly, the other unknowingly through an AI-powered essay service — while the third insisted she did not use AI to generate content and instead relied on a reference organiser. All three students say they submitted evidence to support their claims, including updated citation lists and time-stamped writing logs. Despite this, they said their appeals were denied, and they were issued formal academic misconduct warnings, which affected their GPAs.
The students also raised concerns about NTU’s disciplinary process, saying they faced communication barriers, limited opportunity to defend themselves, and inconsistent treatment. One student, who posted about her case on Reddit before speaking to The Straits Times, said she was “shut down” in discussions and that her case was decided via email without a proper panel hearing. Another said that despite his professor initially deducting only partial marks, the School Academic Integrity Officer escalated the penalty to a full zero without offering a new hearing.
NTU maintains that the module’s guidelines were clearly communicated and repeatedly emphasised throughout the semester, and that fabricated citations constitute serious academic misconduct. However, it did not answer questions about its criteria for detecting citation-related AI misuse or whether tools like citation generators are categorised as generative AI. The case has sparked debate about transparency and fairness in how AI-related academic rules are enforced, especially in light of Singapore’s broader university policies, which allow AI usage under strict conditions.
The controversy underscores the complex and evolving role of AI in education, where institutional rules may not keep pace with technological realities or student usage patterns. As universities navigate this new terrain, clearer policies, more transparent enforcement, and open channels for student feedback may be necessary to uphold both academic integrity and procedural fairness.
