We received a paper with potentially important results. After review and revision, we accepted it. On further reflection, and after asking the authors for more information, we became concerned. The study is an RCT, and the only protocol initially available was slim but appeared authentic. It emerged that there were two protocols: one for a pilot trial and, if that proved positive, a second aimed at randomising more people. One residual concern was an imbalance between the two randomised arms. The authors’ statistical advisor explained that such an imbalance, although large, is not necessarily unexpected with older versions of random number allocation programs.
We present this case to the COPE Forum for discussion as we were uncertain whether the degree of numerical imbalance should be considered unacceptable. We also seek advice on two questions: since the pilot trial showed a significant difference (p<0.001), might it be considered unethical to recruit many more participants before publishing? And should the unpublished data from the pilot trial be included in the final analysis?
The Forum argued that this was probably more a methodological problem than an ethical issue. All agreed that the authors have a responsibility to publish the data from the pilot study or, at the very least, that the editor should request that the methods and results of the pilot study be included in the final report. The Forum also suggested that the editor should question whether the study is worth publishing at all; if he believes it has value, he should publish it. Another suggestion was to write a commentary on the paper raising these issues.
June 2008
We presented an accepted (but not yet published) paper about which we had concerns regarding randomisation imbalance and a pilot trial that had not been reported. COPE reassured us about the imbalance and suggested we ask that the pilot trial data be included. We sent the authors a list of our concerns. Their responses were far from reassuring, and they refused to include any information from the pilot trial. We have now rejected the paper and instituted an investigation into our concerns about the conduct of the trial.
August 2008
We rejected this paper after the authors refused to include the pilot data in the main paper and refused to give us more information on their method of randomisation or on how they collected data on side effects. We then received a letter from a libel lawyer, but our lawyers rebutted the case. We also contacted a government body overseeing drug licensing and trial conduct in the authors’ country, as the study was done at a private institute where the corresponding author is the clinical director and his wife is the administrative director. Initially someone from that body agreed to investigate, but then its head and several others were charged with corruption. We have now contacted a further, different overseeing institution but have not yet had any reply.
After the discussion of this case in 2008, our worries about the paper grew, and we could not get direct answers from the authors. We did find that the data in one of their tables, which presented subgroups, could not be reconciled with their table of baseline data, and that the randomisation ratios in the subgroups of that table were highly statistically unlikely to have arisen from proper randomisation. The authors admitted that their computer program had small errors, but the ratios in some subgroups were far too high or far too low to be explained in this way. Even though we had accepted the paper, we reversed our decision and rejected it.
After the rejection, we asked several bodies in the authors’ country to start an investigation; the corresponding author is at a private institution. One person did agree, but all they did was examine the paper we had sent, and the short report they returned did not add anything to what we already knew.
The paper, as published in another journal, does not contain the subgroup table we saw. We remain concerned because of the known mismatch between that table and the baseline table, and because of the apparent errors in its randomisation ratios. We also have other questions about this study.
Do we have a duty to take this further? Should we contact the other journal’s editor?
Discussion and advice (June 2009)
The Forum concluded that the editor had done everything in his power: he had set up an investigation and pursued it as far as he could. Some suggested that the editor should contact the other journal to say that he had had concerns about the paper during peer review, but others argued that, in the absence of hard evidence, there is little the other editor could do. Most agreed that the editor had exhausted all avenues available to him.
Update (August 2009)
The editor agreed with COPE’s conclusion that there was nothing more to be done. The editor considers the case now closed.