
Artificial intelligence in the news

This page is regularly updated


It is rare that the mainstream media and the specialist world of publication ethics coincide. However, since the release of OpenAI’s ChatGPT tool at the end of 2022 (registrations are currently suspended because of demand), both have been busy with talk of the possibilities and ethics of artificial intelligence tools for content creation. In such a rapidly evolving area much of the discussion is still exploratory; firm guidance is likely still to come once the possibilities and implications of tools like ChatGPT have bedded in. However, some key themes and questions are starting to emerge, and we offer here a brief overview of current discussions.

What are AI writing tools?

The defining feature of Artificial Intelligence tools is that they ‘learn’ (hence ‘intelligent’) by being trained on a specified body of information, whether that is identifying images as ‘cats’ or ‘not-cats’ or – in the case of this newest generation of Large Language Model (LLM) AIs – the structure and conventions of language. The latest AI writing bots model the likelihood that certain sentence structures, phrases, topics and writing conventions go together, enabling them to produce text on almost any topic.
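
As a rough illustration of that idea (a minimal sketch only, not how ChatGPT or any other product is actually built), a language model can be thought of as learning which words tend to follow which, and then generating text by repeatedly choosing a likely next word. The tiny example corpus and the bigram approach below are invented for illustration; real LLMs use neural networks trained on vast bodies of text, but the underlying principle of sampling probable continuations is the same.

```python
import random
from collections import defaultdict, Counter

# Toy example: learn which word tends to follow which in a tiny corpus.
corpus = "the committee reviewed the paper and the committee approved the paper".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count how often `nxt` follows `prev`

def generate(start, length=6):
    """Generate text by sampling each next word in proportion to how often it was seen."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        choices, counts = zip(*options.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the committee reviewed the paper and the"
```

The output is plausible-sounding rather than verified: the model simply continues text in ways that match the patterns it has seen, which is why questions of accuracy and responsibility arise later in this piece.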

Threats and opportunities

As this study of AI in higher education observed last year, AI is often seen in very binary terms, particularly as either threat or opportunity. While no one doubts that AI tools are here to stay – and some scholarly publishers are already using automated tools with a degree of AI for tasks like generating research questions and assisting with literature reviews, generating slide decks from a prompt, and creating and editing original videos from text prompts (see the AI discussion at a COPE Forum for more on this) – there is a distinct undertone of unease about the newer LLM tools. Their ability to mimic human expression is awe-inspiring, but it also raises potential ethical challenges, especially surrounding authorship, verifiability and originality.

Can AI be an author?

The question of authorship is a particularly important topic of debate. As Chris Stokel-Walker discussed in a recent article on Nature.com, AI bots are already being cited as authors in journal articles and preprints, and policies are racing to keep up with the ethical questions this raises. WAME published a set of recommendations in January 2023 which state unequivocally that AI should not be cited as an author because it cannot take responsibility for what it has produced; nor is it a legal entity to which recourse can be taken or copyright assigned (a recent paper on Nature.com has highlighted similar issues with regard to patents awarded to inventions designed by AI). Both Springer Nature and Taylor & Francis have also declared that authors must give details of any input from AI tools in acknowledgements, methods sections or similar rather than citing them in the author list. There is an emerging consensus calling for an ‘assisted-driving’ approach to AI: all content should be supervised and revised by a human author. Asking ChatGPT itself whether it fulfils all the ICMJE criteria for authorship produces a clear answer: no, it does not.

The ethical risks of AI

It is clear that tools like ChatGPT, Bloom and others of their ilk offer new potential for people to subvert ethical practices. Since the modus operandi of such tools is to return text that is probabilistically plausible rather than correct, irresponsible use (for example, not checking facts and citations for veracity or accuracy) can seed incorrect or misleading information into the scholarly literature. This is of particular concern in fields that bear on public health, legal standing or finance. There are also potential issues for editors in detecting the use of AI if authors do not declare it: OpenAI is working on a watermark for outputs generated by ChatGPT, while Turnitin is working on AI-writing detection tools. A new app, GPTZero, was launched in January to detect work created by AI writing bots. However, there are doubts about how reliable any of these will be.

There are also questions about how AI interacts with other unethical practices like paper mills and authorship for sale. It is possible that AI tools will actually lower the incidence of some of these: why would an author go to the expense and risk of commissioning a paper if they could ask a bot to create something unique for them (though note that this article in ACS Energy Letters suggests otherwise)? In other ways, though, AI could normalise unethical behaviours by creating doubt over whether credit is being given for work done and by facilitating plagiarism. For this reason WAME is calling for open access tools to help publishers with detection.

Wider ethical issues

Others are concerned with more philosophical questions about the impact of AI writing tools on the nature of writing and human creativity. Several authors have pointed to the potential for ChatGPT and its fellows to suppress creativity and differentiation in writing styles, while articles on its use in higher education have focused on harnessing its presence as a vehicle to explore how to teach writing skills to students. On the other hand, such services may be a great leveller for authors who lack skills or confidence in academic English, and may assist those with learning disabilities; prohibiting access to assistive or adaptive technologies may have human rights implications. More unpalatable is the evidence of hateful content and bias in the data used to train these tools. OpenAI have tried to deal with this by outsourcing the scrutiny of disturbing material to an overseas company paying minimal wages, which has not helped its ethical reputation. Another article on Scholarly Kitchen has highlighted ChatGPT’s ability to ‘lie’ (that is, to come up with information that is reliable only in the probabilistic sense). We would remind readers that – for now at least – COPE members must be humans.

Join in the discussion at the March COPE Forum on AI and writing.

Update March 2023

Detection tools

As LLMs become more ubiquitous, concern has grown about our ability to detect their use in many areas of life. A recent paper suggested that humans incorrectly identify AI-generated journal abstracts in 14 per cent of the cases they were given. At the time of writing, several detection tools have been made freely available (see an overview of four of them here). However, accuracy rates vary, and detection can be compromised by altering computer-generated text. Meanwhile, the integration of ChatGPT into Microsoft’s Bing search engine has continued to produce stories of errors and misleading information. It seems that the honeymoon period of LLMs is over, and we are now entering a phase of greater scrutiny and accountability: several legal cases are open in the US and the UK challenging the use of copyrighted material for training LLMs. COPE’s own recent position statement on AI and authorship certainly supports the need for those working in publication ethics to know what they are dealing with.

Update April 2023

LLMs become more sophisticated – as do responses to them

As generative AI tools become ever more ubiquitous, attention to their use is becoming more nuanced. Philosophers Ryan Jenkins and Patrick Lin argue that we need to think more carefully about making general statements on AI and authorship. They point out that many tasks done by AI are no more significant than ones we routinely outsource to editing tools or, indeed, other humans. In particular, they advise us to think about the impact of the AI contribution on the final text, and about whether that contribution would deserve credit if it had been made by a human author. Meanwhile, one journal which had published an article listing ChatGPT as an author has issued a corrigendum, moving the bot to the acknowledgements instead.

Authors continue to challenge their readers with the use of AI: one article on ‘Chatting and Cheating’, submitted under the names of human scholars, was later revealed to have been written by ChatGPT. Interestingly, it concluded that AI tools ‘raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism.’ Google is beta testing its Bard AI tool as part of its search functionality, while results produced by Microsoft’s updated, AI-powered Bing search have attracted some negative attention. Perhaps most significantly, OpenAI have just released GPT-4, which is said to have greatly enhanced capabilities and more predictable, reliable output. However, an open letter calling for a pause on the development of ‘AI systems more powerful than GPT-4’ while further research is done into their capabilities has attracted some high-profile signatures. Has enthusiasm for LLMs passed its peak?

To read more about how these tools are affecting scholarly publication ethics, see the summary of our recent Forum discussion on AI and fake papers. COPE plans to continue addressing the questions raised there, and we will provide further updates.


Alysa Levene, COPE Operations Manager


Page history

Page last updated: 31 March 2023