In the news: June 2018 Digest
The Science and Technology Committee of the UK House of Commons met on May 8, 2018 to discuss research integrity. The transcript and a video of the meeting are available on the Science and Technology Committee's website.
The conversation was wide-ranging. Topics included: the relatively poor compliance by British universities with government mandates to report on internal investigations of research integrity problems; interagency cooperation; sanctions for individuals found to have committed fraud and other types of research misconduct; and the relative roles of government, universities, and funding agencies in managing research integrity.
Further information about the UK's system for research evaluation is discussed by Gunnar Sivertsen in a blog post on the London School of Economics and Political Science website. He argues that a performance-based research funding system (PRFS) for universities would be incomplete without both components: indicators of institutional performance, and panel evaluation and peer review of individual performance. Sivertsen describes the variable adoption of PRFSs in other European countries, typically without assessments at the individual level. http://blogs.lse.ac.uk/impactofsocialsciences/2018/01/16/why-has-no-other-european-country-adopted-the-research-excellence-framework/
The sweet spot for how much governmental oversight should exist is discussed by Xueying Han and Richard Appelbaum on the London School of Economics and Political Science website. They surveyed STEM faculty at 25 of the best universities in China to determine their greatest challenges in realizing the central government's goal of becoming a research powerhouse. For the most part, faculty are contracted to their university for three-year appointments, and undergo evaluations at one- and three-year intervals to determine the quality and quantity of their research productivity. This is highly disruptive and stifles innovation and the pursuit of big, long-term projects. Universities use this intensive evaluation of their faculty in part because their own feet are held to the fire institutionally to produce high-quality, high-quantity research. Researchers noted that there is excessive government oversight of the research enterprise at the university level.
The role of research institutions and journals in cooperating to assure the integrity of the published record was highlighted by Lauran Qualkenbush in a brief post on the Nature website. In it, she endorses the CLUE recommendations. Specific recommendations include: sharing research-misconduct reports generated by research institutions with journals; requesting correction or retraction of publications as soon as the data are known to be false; and direct notification of institutions when journal editors suspect misconduct, in order to promptly secure the raw data.
The Netherlands Research Integrity Network (NRIN) held a research conference on April 20, 2018, and the PDFs and abstracts of the presentations are available at https://www.nrin.nl/agenda/nrin-research-conference-2018/. The topics were varied and included the use of questionable research practices, responsible research practices in RCTs, and initiatives to foster responsible research practices.
Nature and its family of journals have adopted their own ethical oversight of papers that involve stem cells, gametes, or human embryos, or clinical studies of cells derived from pluripotent stem cells. The journals will require a separate ethics statement from the authors and will obtain a separate ethics review for some of these papers, especially those that involve maintaining cultures of human embryos to approximately 14 days or beyond. https://www.nature.com/articles/d41586-018-05030-2
Till Bruckner summarizes, and then editorializes about, a paper published in JAMA in April 2018 in which the authors studied the rates at which results from clinical trials were reported on WHO-approved trial registry sites and in publications. They report that only 4 of the 20 top non-commercial funders of health research earned "full marks" for demanding trial registration and results reporting, with follow-up to ensure compliance with these rules. Major philanthropic organizations, like the Gates Foundation and the Wellcome Trust, don't yet require sharing of results. The reason for concern, per Dr. Bruckner: over $85 billion invested in medical research goes to waste every year because results are not available. http://www.peah.it/2018/05/5322/
On May 25th, the EU's General Data Protection Regulation (GDPR) went into effect. In addition to causing inbox overload from companies explaining their new policies, the GDPR affects how data can be sourced, stored, and used. Several issues remain: each country has been left to make some decisions on its own and will have different systems. To address this, a Code of Conduct for Health Research will be released to help address some of the possible unintended consequences of this new law.
In the late 17th century, the chemist Robert Boyle indicated that he should describe his work such "that the person I addressed them to might, without mistake, and with as little trouble as possible, be able to repeat such unusual experiments". In a Nature blog post, Philip Stark argues that much of the problem with the reproducibility of scientific work lies in the failure to describe methods well enough to meet Boyle's requirements. He calls this a failure of "preproducibility" and encourages all of us to follow his lead: decline to review papers with insufficient information to be preproducible.
Catherine Winchester describes her role at the Cancer Research UK Beatson Institute developing best practices for research integrity with its approximately 300 researchers. She works on grant submissions, data curation, improvement of manuscripts, and integrity. She notes that her data-curation work has been the most complex, but it is now the norm at the institute. Her role is an interesting one, and one that institutions that can afford it may consider as infrastructure for their research enterprise.
The American Innovation and Competitiveness Act of 2017 mandated a report about the state of the science on climate change. While analysis shows broad agreement that climate change, largely driven by human activity, is real, there is non-reproducibility in forecasts of actual climate effects: temperature, rainfall, etc. Possible reasons include the big-data demands of this work: a petabyte (10^15 bytes) of data, even if publicly accessible, will not be usable by many study groups. Repeating samples, including tree-ring data or deep ice cores from the polar regions, is very expensive. https://psmag.com/environment/what-a-reproducibility-crisis-committee-found-when-it-looked-at-climate-science
A survey of trainees involved in bench research showed that about 25% did not receive adequate mentoring, about 40% were pressured to produce positive results, and about 63% reported that the pressure to publish influenced the way data are reported.
The XPhi Replicability Project will seek to replicate about 40 studies in experimental philosophy, a field whose methods are similar to those of social psychology. Recent data show poor reproducibility in social psychology, so the project will estimate how well replication fares in experimental philosophy.
The theme of Peer Review Week, September 10-15, 2018, is "Diversity and Inclusion in Peer Review". The announcement about the week and ways to participate is at https://peerreviewweek.files.wordpress.com/2018/05/prw-press-release1.pdf
Deborah Sweet published a blog post on CrossTalk examining the pros and cons of publishing peer reviews. She concludes that the arguments against publishing peer reviews are stronger than those supporting it, so the Cell Press family of journals will, for the time being, not publish the reviews.
Are there exceptions to maintaining confidentiality in the peer review process? What circumstances should actually promote the sharing of parts or all of a paper under review with a research group? Is it legal or ethical to do so? David Kent offers a provocative take on these questions.
Or before peer review? In a study looking at whether researchers reveal their research prior to publication, Marie Thursby and colleagues surveyed academic researchers in nine fields. More than 67% disclosed their findings before publicly presenting or publishing their work. Why? To solicit feedback, to take credit for their work as early as possible, and to gain collaborators. Rates of pre-public revelation were highest among mathematicians (78%) and lowest in medical schools.
PEERE is an "EU-funded programme to improve efficiency, transparency and accountability of peer review through trans-disciplinary collaboration". The Royal Society in the UK will share 13 years' worth of historic peer review data in order to help improve public trust and support excellence in research and research culture.
Several organizations, such as Publons and the Nature publishing group, now provide online peer review training. However, there is little evidence to support the efficacy of such courses, or of in-person trainings, in improving the quality of peer review.
Publons has developed a survey about peer review based on the survey taker's personal experience and perceived norms in his or her field of research. If you wish to participate, you can find the survey at:
Nobuko Miyairi describes the obstacles to broader uptake of ORCID by Japanese institutions. These include existing infrastructure in Japan for disambiguating names, local processes that already accomplish outcomes similar to ORCID's (so adoption would require extra resources from institutions), and the added complexity for resource allocation. Even so, she is hopeful that a recent ORCID Japan Member Meeting can help move researchers in Japan toward adoption of ORCID.
Dr. Kris Sealey describes a research study for which she is the principal investigator that focuses on "how issues of diversity might be addressed at the level of producing the scholarly object". The project seeks to determine best practices and a code of publication ethics in the humanities, and provides a link to an online form for comments from the philosophy community. The comments on her blog are rather interesting as well. Philosophers, it seems, are not averse to being a bit rude, even if they use bigger words than on some other forums.
How should an organization (research group? department? corporation? country?) measure the impact of the research it does? The Federation for the Humanities and Social Sciences in Canada recommends "flexible and adaptable approaches" with a broad definition of "impacts" and a "diverse mix of impact indicators", which the researchers themselves can help select. http://blogs.lse.ac.uk/impactofsocialsciences/2018/01/10/approaches-to-assessing-impacts-in-the-humanities-and-social-sciences-recommendations-from-the-canadian-research-community/
Most disciplines list authors in order of contribution to the research, while in mathematics and economics it is often alphabetical. Matthias Weber argues in a blog post that this is discriminatory and hurts late-alphabet scholars. He reports some evidence to support this based on tenured vs. non-tenured positions: the likelihood of tenure in a top-10 economics department is 26% higher for faculty with an A-surname than a Z-surname! He argues that authors should insist on a contributorship model to determine order, or a random order if people contributed equally.
Who qualifies as an author? A survey of over 6000 researchers in 21 scientific disciplines exposes widely varied answers to a seemingly simple question. About 90% noted that individuals who draft the paper or interpret the data would usually or almost always be included, but only about 60% felt that those who execute the experimental plan or design the experiments would be. Answers varied by discipline and geography.
"Stop the Rot" is the word-bite from this article describing multiple authors' assessments of publications in presumed predatory journals. The authors expose where papers published in predatory journals are coming from: India accounted for over 500 and the US almost 300, with all other countries shown below 100. The authors argue that it is unethical to publish research in these journals because the papers won't be seen, aren't really peer reviewed, and waste the participation and attendant risks assumed by the people and animals in the research.
The University Grants Commission in India removed 4305 journals from its approved list, including journals published by Duke University Press, Sage Publications, Oxford University Press, and Economic and Political Weekly. In some cases only the online version was removed, while the print version remained approved.
Citations of preprint versions of papers that are ultimately published in peer reviewed journals are problematic: the preprint version may not be the same as the final paper; the preprint citation takes that citation away from a journal, where it would contribute to its impact factor; and the author can't easily aggregate information about the impact of her or his research. A lot isn't known yet about how preprint servers will influence these kinds of metrics. The comments section on the blog is very interesting as well. Representatives from various preprint servers contributed information about how, and whether, they link DOIs from the preprint server with those of the traditionally published paper.
Global South publications
The African Journals Online (AJOL) and INASP have developed assessment criteria for the quality of publishing practices in journals hosted on the AJOL site to assure readers that the journal meets internationally recognized standards and to help journal editors to improve their practices.
So how do researchers feel about reusing data made available by the move to data sharing? Modeling based on available information, the authors report that data sharers are not necessarily data reusers, and researchers in some fields are more likely to reuse data than in others. Experience with data management practices facilitates data reuse, and those who are more able to decode metadata are more likely to consider reusing data. http://blogs.lse.ac.uk/impactofsocialsciences/2018/03/20/what-factors-do-scientists-perceive-as-promoting-or-hindering-scientific-data-reuse/
Read the COPE Digest newsletter for more advice and resources to support your ethical oversight policies and procedures, the case of the month 'Ethics of non-active management of a control group', details of our webinar on 'Creating and implementing research data policies', a Forum discussion document on preprints, and more.