The AI threat to original works
Dear Editor,
I have been a university lecturer for 10 years and have taught English, French, Spanish, and linguistics-related courses in different universities and countries.
Often, in my courses, I have had to call out students for submitting work that, in my estimation, was not produced by them. However, sometimes they tell me, “Sir, I did it by myself.” At times, that is the case, but their work gets flagged because they put forward many claims that are not supported by evidence or references. Or, in the case of foreign languages, they run their compositions through translation software and submit the output as their own work.
One of the things I learnt as a 17-year-old first-year undergraduate student was that many of my ‘original’ thoughts had already been expressed by someone else. That was a puzzling truth to accept at first, but I gradually came to understand what it meant. That is why referencing one’s work is critical in academic writing and research, something students need to grasp from very early in their educational journey.
With the proliferation of artificial intelligence (AI), particularly ChatGPT, many sectors, including education, have been experiencing rapid change. One of my latest research articles on ChatGPT, among the very few situated in the Jamaican and wider Caribbean contexts, reveals that the chatbot presents numerous benefits to higher education stakeholders: it makes educators’ work easier, helps them with lesson planning, and saves time and energy. For students, it serves as a tutor, breaks down complex tasks, and generates ideas for essays in real time.
However, a major concern among academic faculty is student cheating, especially as many of our universities have yet to provide policies to govern AI integration and usage in this new era. To my knowledge, the Teachers’ Colleges of Jamaica (TCJ) has circulated a policy to the institutions under its umbrella, and the University of Technology, Jamaica is in the process of formulating its own. Too long a wait, isn’t it?
Students often use paraphrasing tools in an attempt to conceal their use of ChatGPT, but platforms like Turnitin and iThenticate help lecturers detect major infractions.
But a new wave of concern is emerging: AI now writes like academics and researchers, which makes our job more difficult, especially in scientific research and publication.
Imagine sending a manuscript to a journal only for the editor-in-chief to return it, flagged for a high AI-detection score or AI-paraphrased text. If you are certain that you did the work yourself, you will begin to have the same reaction as the students: “But I did it myself; it’s my own work.”
Imagine never having had that issue before, after publishing numerous articles in top-ranked journals. How do you now justify to the editorial team that you actually compiled the work in an ethical manner? Imagine writing your master’s thesis or doctoral dissertation without AI’s assistance, only for your Turnitin report to come back with an unacceptably high similarity rate.
As researchers, we understand how to write, especially as scientific reporting uses specific jargon and a professional tone. AI is powered by large language models (LLMs), deep-learning algorithms equipped to summarise, translate, predict, and generate human-sounding text, and these models draw their data from research journals, newspapers, and other online sources. It is therefore quite likely that AI will sound and write like academics and researchers.
How do we then move forward when we face serious competition from AI, which basically steals our information?
We are losing our voice as academic researchers.
What is also unsettling is that many of us spend hours conducting research and preparing manuscripts, and sometimes pay to get our articles published, only for AI companies to make millions off our work.
Undoubtedly, AI and ChatGPT bring numerous benefits to the education sector; however, it is clear that they also pose serious threats.
Oneil Madden
maddenoniel@yahoo.com