Exploring the potential and limitations of ChatGPT for academic peer-reviewed writing: Addressing linguistic injustice and ethical concerns
Abstract
ChatGPT is a large language model developed by OpenAI that uses neural networks and the transformer architecture for Natural Language Processing (NLP) tasks. The model has proved immensely popular, reaching 100 million users within two months of launch, and Microsoft has announced a multibillion-dollar investment in OpenAI. This commentary explores the potential and limitations of using ChatGPT for academic writing for publication. The model can assist with editing tasks such as spelling and grammar checking, summarisation and translation, but its use raises ethical questions about AI-generated text in academic work. ChatGPT's potential lies in its ability to address the linguistic injustice faced by non-native English speakers in academic publishing: with its support, researchers can communicate their findings in English more effectively. Writers can also leverage ChatGPT's personalised feedback to improve their writing style and gain new perspectives that enhance their content. However, the accuracy of ChatGPT's output is limited by the quality of the information fed into it, and the model can generate incorrect text. It is therefore not advisable to rely solely on ChatGPT for writing assistance.