A group of six authors published a study in a peer reviewed journal comparing the efficacy of two drugs of the same class (A and B) with a placebo and with each other. One year later the lead author of that study, searching Medline for new evidence on the efficacy of drug A, found a study published in another peer reviewed journal the year after his own, by three authors from another country. These authors had changed the number of patients, the type of surgery, and the regimen of drug A, and had added a fourth group (drug C). Nevertheless, the author of the first paper identified striking similarities between the two publications. After reading both papers very carefully, the editor came to the following conclusions:

1. Most of the second paper uses sentences and wording literally identical to the first, in all parts of the paper. The only significant "new" sentences, mainly in the discussion, concern the role of drug C.
2. The second paper cites 27 references, 17 of which are identical to those in the first paper. Of the 10 "new" references, six are on drug C and two on issues related to the surgery.
3. Demographic and surgical data, reported as means ± SD, numbers, or medians (range), are identical for the drug A, drug B, and placebo groups in the two papers. The only differences between the papers concern the type of surgery and the method of postoperative analgesia (two different analgesics are used). The first paper also reported estimated drug costs; the second did not.
4. Reported postoperative VAS pain scores (medians and ranges) for the drug A, drug B, and placebo groups are identical in the two papers at five of five time points.
5. In the second paper, the demographic data and pain scores of the drug C and drug B groups are identical.
6. The reported statistical analyses are identical, including the "ranked sum test of Raatz," a test that is very rarely, if ever, used in the medical literature.
7. The power analyses are identical.
However, the authors of the first paper concluded that 43 patients per group were required, whereas the authors of the second paper concluded from the same power analysis that 17 were needed.
8. In the second paper, the reported incidences of nausea and vomiting with drugs A, B, and placebo are sometimes identical to and sometimes different from those reported in the first paper.
9. For all comparisons of drug A vs placebo, drug B vs placebo, and drug A vs drug B, the p values for efficacy are identical in the two papers.
10. Both papers report an astonishing p<0.000006 in favour of drug A compared with placebo for the difference in the incidence of vomiting. Both papers use Fisher's exact test for analysing differences in incidence.
11. Both papers report p<0.009 in favour of drug A compared with placebo for the difference in the incidence of nausea.
12. The second paper cites the first paper twice, once in the introduction and once in the discussion. Both citations are out of any context.

According to the Royal College of Physicians of London (1991), this represents serious scientific misconduct: piracy, plagiarism, and fraud. It is very likely that all the data in both papers have been fabricated. The authors of the second paper have copied the results of the statistical analyses (and the power calculation) from the first report into their own, without even realising that some of the analyses in the original report were flawed. How should the editor proceed with this case?
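The discrepancy in point 7 is checkable arithmetic: a sample-size calculation is deterministic, so identical inputs must yield an identical number of patients per group; 43 and 17 cannot both follow from the same analysis. As an illustration only (the papers' actual effect sizes, alpha, and power are not reported here, so the values below are hypothetical), a standard normal-approximation formula for comparing two proportions can be sketched:

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per group to detect a difference between two
    proportions (normal-approximation formula). All inputs are
    illustrative, not taken from either paper."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical incidences of vomiting: 60% with placebo vs 30% with drug A.
print(n_per_group(0.60, 0.30))
```

Whatever the true inputs were, running the same calculation twice gives the same n; this is exactly the kind of internal inconsistency a reviewing statistician would flag.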
- This seems to be a serious case of plagiarism/fabrication, and sufficiently serious as to constitute fraud. The authors should be asked for an explanation within a specified time limit; if there is no response, the matter should be referred to their employer/institution.
- The editor should write to the editor of the other journal, informing the authors of his intentions.
- Did the second paper involve the pharmaceutical company that manufactured drug C?
- Both papers should be shown to a statistician with experience in determining whether data are likely to be fraudulent.