The coronavirus pandemic has brought several home truths to the fore. Chief among them is the increasing incidence of “research fraud”. The linked article describes research fraud in the biomedical literature, but there is ample evidence that many published articles fall under “questionable research practices”.
What’s more, many commonplace research misbehaviors are categorized as questionable research practices (QRPs) rather than outright misconduct. That includes publication bias, where scholarly journals favorably publish positive results over negative ones; p-hacking, where a researcher plays around with data until they meet significance thresholds; and cherry-picking, or selective reporting of data. Other QRPs include HARKing — Hypothesizing After the Results are Known, where researchers search for trends in already collected data — and publishing the same study twice.

— It’s Time to Get Serious About Research Fraud
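To see how p-hacking works in practice, here is a hypothetical sketch (my own illustration, not from the article) of one common form: “peeking” at the data in batches and stopping the moment a significance threshold is crossed. The data are pure noise, yet optional stopping produces “significant” findings far more often than the nominal 5% rate. The function names and parameters are mine.

```python
import math
import random

def z_pvalue(xs):
    """Two-sided p-value for H0: mean == 0, assuming known variance 1."""
    z = sum(xs) / math.sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peeking_trial(rng, batch=10, max_batches=10, alpha=0.05):
    """Collect noise data in batches; declare 'significance' the first
    time p < alpha -- the optional-stopping flavour of p-hacking."""
    xs = []
    for _ in range(max_batches):
        xs.extend(rng.gauss(0, 1) for _ in range(batch))
        if z_pvalue(xs) < alpha:
            return True   # a false positive: the data are pure noise
    return False

rng = random.Random(0)
trials = 2000
hits = sum(peeking_trial(rng) for _ in range(trials))
print(f"false-positive rate with peeking: {hits / trials:.1%}")  # well above the nominal 5%
```

An honest analysis would fix the sample size in advance and test once; the peeking researcher instead gets up to ten chances to cross the threshold, which is exactly why disclosure of the full procedure matters.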
Republishing the same data, whether twice or many times over, deserves particular scrutiny. For example, I am surprised to read many published articles on Nasopharyngeal Carcinoma: a single “researcher” from a renowned US institution has published a string of articles with a common recurring theme. Call it “bio-hacking”, but of the worst kind. While it is not outright fraud, the sheer number of duplicate publications on the same issue should warrant strong reconsideration, and it raises the topical question of what constitutes “healthcare innovation”.
Here’s another interesting write-up from Nature.
These are important but they overestimate the benefits of correcting scientists’ minds. We often forget that scientific knowledge is reliable not because scientists are more clever, objective or honest than other people, but because their claims are exposed to criticism and replication.
The key to protecting science, therefore, is to strengthen self-correction. Publication, peer-review and misconduct investigations should focus less on what scientists do, and more on what they communicate.
It would be impossible for any one individual to keep a close watch on the sheer volume of literature (and subsequent retractions).
Here’s a practical suggestion:
By focusing on reporting practices, the community would respect scientific autonomy but impose fairness. A scientist should be free to decide, for example, that ‘fishing’ for statistical significance is necessary. However, guidelines would require a list of every test used, allowing others to infer the risk of false positives.
Carefully crafted guidelines could make fabrication and plagiarism more difficult, by requiring the publication of verifiable details. And they could help to uncover questionable practices such as ghost authorship, exploiting subordinates, post hoc hypotheses or dropping outliers.
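The excerpt’s point about listing every test can be made concrete with a back-of-the-envelope calculation (my own illustration, assuming independent tests, each with every null hypothesis true): the chance of at least one false positive grows rapidly with the number of tests run.

```python
# Family-wise error rate: the probability of at least one false positive
# across m independent tests at significance level alpha, when every
# null hypothesis is actually true.
def familywise_error_rate(m: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** m

for m in (1, 5, 20):
    print(f"{m:2d} tests -> {familywise_error_rate(m):.0%} chance of a false positive")
# prints:
#  1 tests -> 5% chance of a false positive
#  5 tests -> 23% chance of a false positive
# 20 tests -> 64% chance of a false positive
```

This is why a required list of every test used is so informative: a reader who knows twenty tests were “fished” can infer that a lone significant result is more likely noise than discovery.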
I came across an interesting discussion on Hacker News on the same issue, and took a screenshot of one comment, posted below. Please note that I don’t endorse the opinion, nor can I vouch for the authenticity of its claims, as the forum is unmoderated. However, it is an opinion, and I am trying to balance things out in the interests of “fairness”. Not all science is fraudulent, but it is essential to stamp out the individuals who indulge in it.
I hope that we look inwards and reflect on what is essential in the public domain, and on how basic research leads to impact.