The COPE 'Artificial intelligence (AI) in decision making' discussion document introduces issues to be considered alongside the opportunities AI solutions offer in the publication process, with recommendations on best practices.
The use of AI in the publication process is intended to increase the speed of decision making during the review process and reduce the burden on editors, reviewers, and authors. The adoption of AI raises key ethical issues around accountability, responsibility, and transparency. The guidance provides recommendations, highlighting issues of significant interest to publishers, editors, and authors in light of current technologies, and offers insights into future developments along with suggested reading materials.
- AI tools can offer data-driven guidance to assist humans in decision making or offer automated decision making without human intervention.
- ‘AI’ should not be used interchangeably with ‘automation’: automation refers to rules-based software, whereas AI refers to technology which learns and replicates a level of human intelligence to make decisions or return information.
- AI and automation tools have demonstrated success in assisting faster and more accurate peer review.
- Accountability must ensure the technology is non-discriminatory and fair.
- Responsible application of technology requires human oversight, checks, and monitoring.
- Transparency of processes must ensure technical robustness and rigorous data governance.
- Bias in data sources and potential bias in the design of the tools should be identified and corrected. Where correction is not possible, transparency about the limitations is essential.
- Recommendations are made for publishers, editors, and authors on the ethical application of AI.
About this resource
Cite this as
COPE Council. COPE Discussion document: Artificial intelligence (AI) in decision making — English.
©2021 Committee on Publication Ethics (CC BY-NC-ND 4.0)
Version 1: September 2021
Page history: 1 December 2021