Authors Elizabeth Wager, Sabine Kleinert on behalf of COPE Council Version 1 March 2012 How to cite this Wager E, Kleinert S on behalf of COPE Council. Cooperation between research institutions and journals on research integrity cases: guidance from the Committee on Publication Ethics (COPE). Version 1, March 2012. https://doi.org/10.24318/cope2018.1.3
Our COPE materials are available to use under the Creative Commons Attribution-NonCommercial-NoDerivs license https://creativecommons.org/licenses/by-nc-nd/3.0/ Attribution — You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). Non-commercial — You may not use this work for commercial purposes. No Derivative Works — You may not alter, transform, or build upon this work. We ask that you give full accreditation to COPE with a link to our website: publicationethics.org
In June 2014 we received a manuscript by four authors from a well-known research institution. They described a randomized trial comparing a variation on a procedure with standard care. In total, 200 patients were randomized, 100 to each arm. As measured by an interview, patients undergoing the new procedure were statistically significantly more satisfied than those in the control arm. This manuscript was submitted 116 days after the same group of authors had sent us a first manuscript on the same topic.
The first manuscript, however, described an observational study: 50 patients had chosen the new procedure and 50 underwent conventional treatment. The patients rated the new procedure higher (statistically significantly). In their discussion, the authors mentioned the non-randomized design of their study as a limitation and called for a randomized comparison. At the time we rejected the manuscript because we were not convinced by the non-randomized design. The senior author appealed our decision, saying that it was very difficult and almost unethical to carry out a randomized trial. We did not change the decision, but I assured the author that we would evaluate, and possibly review, a manuscript reporting a randomized study.
In the cover letter of the second manuscript, dated June 2014, the authors referred to this discussion and stated that 100 patients had been randomized to each group. [As an aside, an online source detailing procedures carried out in the authors' department states that the procedure in question is performed more than 1200 times a year. It is therefore conceivable that the authors could have randomized 100 patients to each study arm within a period of 3–4 months. In his appeal against the rejection of the first manuscript, the senior author mentioned that the ethics committee had already expressed approval. And yet, common experience with randomized trials indicates that the present study would be an extremely fast trial in terms of screening, consent, inclusion, examination (4 days after the procedure), and analysis.]
Here is the problem: the results are identical in manuscripts 1 and 2. In numerical form the results are presented only in tables (not in the main text and not in the abstract). In all three tables, the values are identical in both manuscripts. All three tables were submitted as one file, leaving open the possibility that the authors mixed up files. The figure (a horizontal, stacked bar chart) is slightly different in layout, but the numbers indicating the results are identical. This figure was submitted as a separate file.
The main text of the second manuscript is identical to the first one except for minor updates in relation to the numbers of subjects and study design. All four photographs illustrating the procedure in both manuscripts are identical. The reference list is identical.
I can think of only two ways to make sense of this submission: sloppiness or fraud. Under the sloppiness assumption, the authors would have submitted a text referring to their randomized controlled trial together with tables referring to the earlier observational study. This is conceivable mainly because it is hard to imagine that authors believe they can get away with submitting the same data in two manuscripts describing two completely different studies, separated by only 4 months. On the other hand, the tables and figures differ in layout and in several details from those submitted with manuscript 1. If sloppiness is not the explanation, it must be fraud, and we can only reject the paper.
I feel we should be frank with the authors about our decision to reject the paper. Confronted with this decision, however, the authors have no incentive to cooperate with us and to send us, for example, their original data. Rather, they may blame the mess on an unfortunate mix-up of files.
Question(s) for the COPE Forum (1) We have decided to reject the paper but how does the Forum think we should now proceed?
The general feeling from the Forum was that there is enough reason for suspicion to require some sort of explanation from the authors. The editor should ask the authors for an explanation and, if the explanation is not convincing (which is difficult to imagine), the editor should forward the matter to the authors' institution.
Another suggestion was for the editor to ask the authors for a copy of the original study protocol and documentation of ethics approval. This would provide evidence that the trial did occur. If there is no study protocol, this would raise concerns.
The editor told the Forum that the journal is planning to reject the paper. However, even if the paper is rejected, the Forum advised that the editor can still contact the authors, tell them that he has identified specific issues with the paper, and ask for an explanation.
In summary, the Forum agreed that there seem to be genuine issues of concern with the paper. The editor should ask the authors for an explanation of this strange sequence of events, and if he is not convinced by their response, he has every reason to involve the institution.