As first reported by The Guardian, universities documented nearly 7,000 confirmed cases of students using AI tools such as ChatGPT to cheat in the 2023–24 academic year, a rate more than triple that of the year before.
The Guardian’s investigation, based on data obtained from 131 universities, shows that AI misuse now accounts for a growing share of academic misconduct, while traditional plagiarism is declining. Experts believe these figures are likely underestimates, calling them “the tip of the iceberg.”
The data shows a shift in how students are breaking the rules: confirmed plagiarism cases dropped from 19 per 1,000 students in 2019–20 to a projected 8.5 per 1,000 this year, while AI-based cheating continues to rise and is projected to reach 7.5 per 1,000 students by the end of the academic year. Academic staff and researchers warn that existing plagiarism detection tools are ill-equipped to flag AI-generated text, especially when students use "humanising" tools to rewrite AI output and bypass detection systems.
Many universities have yet to formally categorize AI misuse, with over 27% not recording it as a separate infraction. According to Dr. Peter Scarfe from the University of Reading, AI cheating is uniquely hard to prove, making enforcement challenging without risking false accusations. While some students use AI ethically—for structuring essays or assisting with learning difficulties—others exploit it to bypass assessments. Videos promoting AI-based essay help flood platforms like TikTok, further complicating efforts to maintain academic integrity.
Experts argue that universities need to fundamentally rethink assessment strategies. Rather than relying on outdated testing models, institutions are encouraged to design assignments that emphasize skills AI cannot easily replicate, such as interpersonal communication and critical thinking. Governments and tech companies alike are investing in the educational potential of AI, but the pressure is on higher education to ensure students use these tools to learn, not just to cheat.
