Artificial intelligence (AI) has attracted great interest in recent years, with breakthroughs such as the Google DeepMind team's AlphaFold making dramatic progress on the protein folding problem. Despite this, few have considered how AI might impact, or even be applicable to, academic writing and publishing. A recent development has changed all of that.
GPT-3, a powerful AI-based language prediction model developed by OpenAI, previously made headlines when it was used to write an article published in The Guardian. It made news once again in June, when it was revealed that two researchers, Almira Osmanovic Thunström and Steinn Steingrimsson, had GPT-3 write an academic article about itself with minimal prompting or interference by humans, and submitted it for review to an academic journal. Notably, GPT-3 is credited as the lead author of this paper, and while the output was found somewhat lacking in complexity, it was nonetheless impressive (the pre-print article can be found here).
While this may, at first glance, appear to be a mere stunt, it could have far-reaching consequences for academia. News organizations like Forbes have already reported on the use of AI in content generation (and Forbes, in fact, uses it themselves). This kind of automated journalism represents a new paradigm that could soon see implementation even in academia. As tools like GPT-3 acquire more impressive capabilities, scientists may begin to use them to generate research reporting. It will thus be increasingly important to consider the implications for publication ethics, author crediting, scientific credibility, and peer review. As Almira Osmanovic Thunström has noted, tools such as GPT-3 have the potential to upend the very fabric of academic publishing, as they could allow scientists to generate entire manuscripts in a matter of hours. Future developments will surely be cause for both caution and excitement.
Click here for the Japanese version.