Are universities over-reliant on Turnitin?
Dear Editor,
Turnitin has become a widely used tool in higher education for detecting textual similarity and promoting academic integrity, especially in academic writing and other written content-based modules. However, its adoption also raises several pedagogical, ethical, and practical concerns that institutions must navigate carefully.
One of the most significant issues is that many lecturers and students misinterpret the similarity percentage score. A high similarity score does not automatically indicate plagiarism or academic dishonesty, and a low score does not guarantee originality. Standard elements of academic writing and research, such as references, headings, and commonly used terminology, may inflate similarity. This can lead to unfair judgements or unnecessary anxiety among students.
In fact, there are mounting complaints from students that their work is being flagged as likely produced by generative artificial intelligence (GenAI). Recently, a student called me, enraged that her lecturer was adamant about deducting marks from a coursework assignment because of the similarity report produced by Turnitin. The student maintains that she produced her work without the use of GenAI.
Interestingly, Turnitin has flagged my own research, so I understand the frustration of an innocent graduate-level student. A journal editor recently returned my manuscript because Turnitin said 65 per cent of it was produced by GenAI; that particular journal accepts no more than 25 per cent.
I had to make several attempts to ‘humanise’ my own natural writing because a machine decided my brain was too competent to produce high-quality work, even though I have been writing academically and professionally for many years.
Should a student appeal or contest a lecturer’s misguided accusation of academic misconduct, they may be seen as a liar. But some people are simply strong writers and critical thinkers.
While Turnitin claims to detect AI-generated text, issues remain: false positives (flagging human writing as AI) and false negatives (missing sophisticated AI paraphrasing). The tool can also be biased against second-language writers who struggle with paraphrasing. This creates confusion and fairness concerns in misconduct decisions.
The issue can also place stress, anxiety, and emotional pressure on students. The anticipation of a similarity report can cause fear of unintentional plagiarism: students worry about acceptable percentages and feel pressured to over-edit or rely on paraphrasing tools, which can reduce the quality of their work and undermine their confidence in it.
If Turnitin is used as the sole determinant of academic honesty, human judgement is minimised, contextual factors are ignored, and misconduct cases may be mishandled or escalated. Consequently, AI and algorithmic tools should support, not replace, academic decision-making.
The concern of false positives is affecting academics, researchers, and students worldwide. Undoubtedly, there are students and lecturers among us who rely heavily on GenAI, but what happens when we cannot trust the mechanisms in place to ensure academic honesty? Certainly, universities and colleges need to re-examine their AI policies.
Oneil Madden
maddenoniel@yahoo.com