In the news: July 2021

Each month, COPE Council members find and share publication ethics news. This month the news includes articles on artificial intelligence, open science, research culture, peer review, and more. 

Reproducibility

A study showed that research published in science, psychology, and economics journals that was not affirmed in repeat studies was cited many times more often than research that was substantiated in follow-up studies, prompting an interesting discussion about whether science is self-correcting or, in fact, going in the wrong direction. Are flashier results the least likely to be confirmed in later studies, and do they receive a less rigorous peer review and editorial process?

Poor reproducibility of published research is a major issue that can lead to harm if spurious results are put into practice. The author argues that Open Science principles should guide publishers to open up channels specific to reproducibility, linking research that successfully, or unsuccessfully, attempts to reproduce a study and building collective data on replication work. Open Science advocates also push for pre-registration of research methodology and constant monitoring for retractions.

Research misconduct

Dr. Elisabeth Bik questioned the methodology of an RCT of hydroxychloroquine for the treatment of COVID-19 by Dr. Didier Raoult in March 2020. Since then, Raoult has waged a public campaign against her, including publishing her home address, calling her names such as “nutcase” on Twitter and in interviews, and filing a lawsuit. An international response supporting Bik and the importance of post-publication commentary and scholarly criticism has developed. On a related note, Dr. Frits Rosendaal wrote a 10 point critique of the same paper by Raoult et al, concluding, “This is a non-informative manuscript with gross methodological shortcomings.” He notes that he has not received the same response from Raoult’s group for similar criticisms, reinforcing concerns that others have raised of a gender bias.

An interesting read about Elisabeth Bik, the microbiologist whose work now focuses on detecting image tampering in biomedical publications, and her journey to this work.

Open science

The 2013 US federal mandate that any data collected using federal funding be open and accessible to the public led four major federal agencies that fund education sciences research to require that grant applicants include specific data sharing plans in their applications. The authors of this paper note, however, that data sharing is rarely done in education research. The paper is a thorough discussion of data sharing, including a specific “how to” section for education researchers, although the steps outlined apply to research in other fields as well.

Research integrity

An online survey of researchers in the field of psychology assessed data management strategies and practices across the data collection, analysis, and sharing phases of the research process. Few participants reported formal training or use of data-management support services. Their responses indicate wide variability and a large potential for improved practices.

The Swiss Academies of Arts and Sciences, the Swiss National Science Foundation, swissuniversities and Innosuisse have published an updated code of conduct based on principles of reliability, honesty, respect and accountability. One of the primary aims of the code is to train and promote young scientists and improve scientific integrity in all aspects of research and teaching. The new code expands the definition of misconduct to include “questionable research practices” such as unwarranted self-citation, claiming authorship without contributing substantially to the work, and neglecting one’s duty of care and supervision.

Problematic papers, defined by Cochrane as “any published or unpublished study where there are serious questions about the trustworthiness of the data or findings, regardless of whether the study has been formally retracted”, may be included in systematic reviews. To guide review authors in handling such papers, Cochrane has developed a new policy and implementation guide.

As part of a group of initiatives by the Indian University Grants Commission to address global concerns about the quality of papers published by Indian researchers, all researchers were required to take a course on Research and Publication Ethics in 2020-2021. At the Central University of Haryana, the online course was well received and resulted in a high level of engagement among the 177 PhD students. It will be important to assess whether the lessons learned carry over into the students’ future research activities.

During the pandemic, there was a public health need for rapid publication of information about the virus, the disease caused by it, its treatment, and the policies put in place to combat it. Some of these papers were flawed or the result of misconduct. The slow pace of retractions and corrections stands in stark contrast to the rapidity of the publication process and may have resulted in significant harm (an example being the use of hydroxychloroquine). The authors of this paper argue that there must be a formal, and respected, approach to catching mistakes in the published literature.

Over 5000 grant proposals from 2016-2019 were reviewed by ethics experts to identify ethics issues in the proposals. The experts identified twice as many issues as the authors of the proposals had, suggesting a lack of awareness of ethics issues and a need for better researcher training in this arena.

Bibliometrics

Clarivate Analytics has created a new metric, the Journal Citation Indicator (JCI), the goal of which is to provide a “single journal-level metric that can be easily interpreted and compared across disciplines”. It is based on three full years of citation data for articles and reviews. Each journal’s JCI is calculated and then normalised against other journals within the same field, based on Clarivate’s 235 subject categories. The number of journals in these categories varies widely (for example, Economics contains 373 journals while Andrology has 8 titles), and about a third of journals are assigned to more than one category. Those with more than one category will be compared to the “mean normalised citation impact across all categories assigned”. Without knowing everything about the different categories, it may be difficult to interpret these results.
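As a rough sketch of the normalisation described above (illustrative only: the actual JCI is computed from article-level, category-normalised citation impact by document type and year, and the function names and figures here are hypothetical):

```python
# Hypothetical sketch of category normalisation as described above.
# The real JCI normalises at the article level; this simplified
# journal-level version is for illustration only.
from statistics import mean

def normalised_impact(journal_citation_rate: float, category_mean: float) -> float:
    """A journal's citation rate divided by the mean rate of its category."""
    return journal_citation_rate / category_mean

def jci_sketch(journal_citation_rate: float, category_means: list[float]) -> float:
    """For a journal assigned to several categories, take the mean
    normalised citation impact across all assigned categories."""
    return mean(normalised_impact(journal_citation_rate, m) for m in category_means)

# Example: a journal averaging 4.0 citations per item, assigned to two
# categories whose items average 2.0 and 5.0 citations respectively.
print(jci_sketch(4.0, [2.0, 5.0]))  # (4/2 + 4/5) / 2 = 1.4
```

The sketch makes the interpretive problem concrete: the same journal scores very differently depending on which categories it is assigned to, which is why comparisons across Clarivate’s 235 categories need care.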

Artificial intelligence

The editorial team of Access Microbiology describes the process of combining the current journal with artificial intelligence review tools and many features of a preprint server. The platform will house preprints, Versions of Record publications, open peer review reports, editors’ communications, author responses, and the artificial intelligence review reports.

To explore the potential, pitfalls, and uncertainties of using artificial intelligence to augment the review process, by approximating or assisting the human process in quality assurance or peer review, the authors trained an AI tool on 3300 papers from three conferences, along with the results of the human review evaluations. The study found correlations between the tool’s decision process and other proxy measures of quality.

Peer review

Focus groups of scholars from the social sciences, humanities, natural sciences, and life sciences discussed the burden of peer reviewing. Across all disciplines and career levels, scholars believe that the reviewing burden is unequally distributed, a problem they relate to an increase in manuscript submissions, inefficient manuscript handling, a lack of institutionalisation of peer review and of reviewing instructions, and inadequate reviewer recruitment.

In 2005, three PhD students created paper-generating software called SCIgen, which generates research articles with random text, titles, and charts. Eighty-five gibberish papers generated by SCIgen had been identified by 2012, with more to follow. Between 2008 and 2020, 243 gibberish papers were identified, mostly in the computer science field and in various publication formats. Some of these may be hoaxes submitted without the purported authors’ knowledge, while others may have been submitted to inflate a scientist’s curriculum vitae.

This post on The Scholarly Kitchen addresses a wide range of topics related to the editing of reference lists, an important aspect of journal editing. Automated tools are available to screen reference lists for potential citations of papers published in predatory journals. Because of the importance, arguably spurious, placed on a researcher’s citations in the evaluation of their work, the h-index is increasingly included in documents submitted for promotion and tenure decisions. The index can be gamed by excessive self-citation, and there are instances of authors adding references at the revision stage of the peer review process, when the reference list is less likely to receive scrutiny, producing an “h-index factory”. For these and other reasons cited in the post, the author proposes that the community of researchers and publishers needs to “start casting a more critical eye on what’s going on” in reference lists.
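For readers unfamiliar with the metric, the h-index is the largest h such that the author has h papers cited at least h times each. A minimal sketch (the sample citation counts are hypothetical):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
# Excessive self-citation inflates these counts, which is how the index is gamed.
```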

Peer review week 2021

The theme, “Identity in Peer Review”, selected by an open global survey, is meant to highlight the various identities of the individuals, organisations, and populations involved in scholarly activities and how they influence peer review, with a goal of building a “more inclusive and equitable peer review process”, according to Jayashree Rajagopalan, a co-chair of the 2021 Peer Review Week steering committee. Two hashtags to follow announcements about Peer Review Week on social media and to share your organisation’s involvement are #PeerReviewWeek21 and #IdentityInPeerReview.

Paper mills

A concise overview of paper mills and efforts to detect submitted or published papers from them. 

Research culture

The number of academics who report affiliations with multiple institutions has risen from 10% to 16% since 1996. Possible reasons include institutions’ desire to improve their rankings and individual researchers’ desire to increase their networking opportunities and access to resources. The rise in multiple affiliations has also coincided with research funding schemes in some countries designed to create performance incentives, which may contribute as well.

In a plea to members of the computer science research community to exert social pressure to bring colluders into line, an editorial describes how groups of researchers colluded to increase the likelihood of having their papers accepted at conferences by gaming the review process.

Research culture seemed to change during the pandemic, perhaps becoming a bit kinder. The author of this article recommends that we try to maintain, and in fact expand, this change. “Seeing the light at the end of the tunnel means that we can examine how we want to emerge from this pandemic. After all, research came together to make vaccines. We must harness that talent to create the world we want to work in: fairer, more empathetic, kinder.”

Team science often occurs across departments, institutions, cultures, and countries. Stakes are high for all concerned, and disputes can arise. One way to avoid such disputes is to craft a “scientific prenup agreement or team charter” that establishes processes and spells out roles and responsibilities, as well as how authorship of resulting papers will be determined. It is also important that the research culture allows for crucial conversations.

Preprints

The editor in chief at Research Square, a multidisciplinary preprint platform, points out ways that preprint servers may be poised to expose misconduct through crowd-sourced reports. The transparency of the preprint process could help identify papers from paper mills and keep flawed or fraudulent articles from being published in traditional journals.

Authorship

This blog post provides practical suggestions to avoid authorship conflicts altogether or navigate them if they do occur.

Publication misconduct

Self-citation or text recycling, often known as “self-plagiarism”, is common in the sciences, particularly in descriptions of methods used in research. Responding to limited guidance on text recycling, Cary Moskovitz began the Text Recycling Research Project and has released guidance for editors and authors, aiming to make the practice, when necessary, transparent, ethical, and legal.

Alexandra Elbakyan, who founded Sci-Hub in 2011, has complied with an order by the High Court of Delhi to stop uploading papers to the database. In the past, she has not complied with similar legal actions in other countries. It is believed that she has complied this time because she thinks she may win the case, which would make Sci-Hub legal in at least one country.

COPE Council Member Nancy Chescheir