Artificial intelligence (AI) technology has advanced steadily over the past several years and has begun to introduce data-driven solutions throughout the publication process. AI tools can be developed to provide guidance to humans based on relevant data, or to automate some processes entirely without human intervention. Processes already being considered for AI intervention include journal selection, topic identification, reviewer suggestion, scope assessment, text duplication checking, and statistical analysis. This is not a comprehensive list, and the opportunities for AI use are expanding rapidly.
With the advancement of AI, ethical questions arise as to whether, when, and how AI could or should be used. In this forum we'd like to start a discussion about these ethical issues, which COPE will use to develop a larger discussion document on the topic.
Questions for the Forum:
1) Is there a clear distinction between technical automation and artificial intelligence for automatic decision-making or actions taken in the publication process?
2) Are there processes where full technical automation is acceptable or even expected? Are there processes where full automation would be deemed unethical? Similarly, are there processes where AI-aided recommendations would be expected or deemed unethical?
3) What information do journals need to provide to authors (and reviewers) about AI tools in use at their journal?
4) What happens if an author or reviewer disagrees with a recommendation made, or an action taken, by an AI tool? What procedures should be put in place to appeal an AI action?
We welcome comments on this discussion from both members and non-members. Please add your comments below.
The discussion will continue at the COPE Forum on Monday 11 November 2019, which is open to COPE members. Following this discussion topic, members' cases will be presented for discussion and advice from the participants of the Forum.
COPE members need to register for the Forum by Friday 8 November 2019.
Please do leave any comments below, whether or not you are planning on joining the meeting.
Comments are reviewed and, on approval, added below.
Comments
At the current stage of AI development, it would be prudent to restrict the use of AI to pre-reviewing submitted manuscripts. This gives authors the right either to improve the manuscript or to ignore the AI's recommendations. The following case study explains my opinion.
At some point, I realized that I spend far more time arranging peer review for papers of limited interest than for papers that appeal to a broad readership. It is genuinely difficult to find a reviewer for a paper of limited interest; nobody wants to read it.
But is there any objective algorithm for evaluating the level of interest, beyond my own perception? I trained a neural network, validated it, and found that, while not perfect, it works reasonably well. It looked like a tool that authors could use to make their manuscripts more attractive to readers and, in doing so, simplify my task of finding reviewers for their manuscripts.
Unfortunately, I failed to convince the publisher to employ AI for pre-reviewing submitted manuscripts. But I believe that AI-based pre-reviewing would save editors' time without burdening authors.
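[Editor's note: for illustration only, here is a minimal sketch of how such an interest classifier might be built. The comment does not describe the actual model, so the TF-IDF features, the logistic-regression baseline, and the toy labelled abstracts below are all assumptions, not the commenter's method.]

```python
# Illustrative sketch only: a "reader interest" pre-review classifier.
# Assumes TF-IDF features over abstracts and a logistic-regression
# baseline; the commenter's actual neural network is not described.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labelled data: 1 = broad interest, 0 = limited interest.
abstracts = [
    "We present a general framework applicable across many disciplines.",
    "A survey of recent advances with implications for the whole field.",
    "We report a minor refinement of a niche fabrication protocol.",
    "An incremental update to parameter tables for a specific alloy.",
    "Broadly relevant results on reproducibility across research areas.",
    "A correction to one coefficient in a specialised lookup model.",
]
labels = [1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    abstracts, labels, test_size=2, stratify=labels, random_state=0
)

# TF-IDF turns each abstract into a sparse term-weight vector; the
# classifier then scores how "broad interest" its vocabulary looks.
model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LogisticRegression(),
)
model.fit(X_train, y_train)

# Pre-review use: surface a probability to the author as advice,
# never an accept/reject verdict.
for text, p in zip(X_test, model.predict_proba(X_test)[:, 1]):
    print(f"interest score {p:.2f}: {text[:50]}...")
```

The key design point is the final loop: the tool reports a score that authors may act on or ignore, while acceptance and rejection decisions remain with humans, consistent with the restriction to pre-review described above.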
I am not an editor (only an interested PhD student), but I would like to know if any of you have had experience with Meta (meta.science). It is a company now partly owned by the Chan Zuckerberg Initiative, and it aims to facilitate article discovery for researchers. As a sideline, it has also built an algorithm to 'predict the impact of papers', which appears to be shared with some editors when articles are submitted. I find this very troubling, and I wonder whether any editors have seen or heard about this 'impact prediction score'. Thanks for any input!
With the relentless increase in published research, greater automation is inevitable, and AI will become an important part of it.
AI should be used as a tool to support human decision-making, rather than as a replacement for it. Furthermore, it must not be used as a 'black box' that produces a result on which we blindly base decisions.
It is ethical to use AI as a support tool in areas of content analysis: to identify suitable journals, suggest reviewers, and flag statistical weaknesses. However, this should only form part of the decision-making process; it is unethical to automatically reject or accept content without human intervention. Decision makers must be made aware of biases in computer algorithms. Authors and reviewers need to be told which AI systems a journal is using and for what purpose, in the same way that we make them aware of the use of similarity-checking software. Transparency is essential so that authors can understand the decision-making process and challenge it through the normal procedures.
I completely agree with @Phil Hurst