Exploring the potential and limitations of ChatGPT for academic peer-reviewed writing: Addressing linguistic injustice and ethical concerns
Abstract
ChatGPT is a language model created by OpenAI, utilising neural networks and the transformer architecture for Natural Language Processing (NLP) tasks. The model's popularity has been immense, gaining 100 million users within two months, and Microsoft has announced a multibillion-dollar investment in OpenAI. This commentary explores the potential and limitations of using ChatGPT for academic writing for publication. The tool can assist with editing tasks such as spelling and grammar checking, summarisation and translation, but its use raises ethical questions about AI-generated text in academic work. The potential of ChatGPT lies in its ability to address the linguistic injustice faced by non-native English speakers in academic publishing. With its support, researchers can communicate their findings in English more effectively. Moreover, writers can leverage ChatGPT's personalised feedback to improve their writing style and gain new perspectives to enhance their content. However, the accuracy of ChatGPT's output is limited by the quality of the information fed into it, and the model can generate incorrect text. It is therefore not advisable to rely solely on ChatGPT for writing assistance.
License
The copyright for articles in this journal is retained by the author(s), with the exclusion of the AALL logo and any other copyrighted material reproduced with permission, with first publication rights granted to the journal. Unless indicated otherwise, original content from articles may be used under the terms of the CC-BY-NC licence. Permission for any uses not covered by this licence must be obtained from the author(s). Authors submitting to this journal are assumed to agree to having their work archived in the National Library of Australia’s PANDORA archive.