Make assumptions about research being fraudulent?

BMJ Opinion blog post:

As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they hadn’t ever happened. They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published. When Roberts contacted one of the journals the editor responded that “I wouldn’t trust the data.” Why, Roberts wondered, did he publish the trial? None of the trials have been retracted.

In a lengthy but disturbing blog post, the author raises a very valid issue: zombie trials, in which the patients don’t even exist. It is easier to point fingers at “emerging economies”, but the author completely missed the academic fraud in “developed countries”.

What I find galling about the assumptions here is that universities and private hospitals expend significant resources on marketing. A prominent radiation oncologist at a prominent US hospital covers head and neck cancers as part of her practice, yet writes almost exclusively about a single subsite. Her many “publications and books” appear ghost-written, as if they came from a content factory. Each seems to cite her previous work and endorse her as “superstar” faculty, even though her approach to the subsite is questionable. Names don’t matter: fraud and commodification are so endemic in academic publishing that it is hard to discern what’s right and how statistics are spun to show some benefit.

Here’s something more:

We have long known that peer review is ineffective at detecting fraud, especially if the reviewers start, as most have until now, by assuming that the research is honestly reported. I remember being part of a panel in the 1990s investigating one of Britain’s most outrageous cases of fraud, when the statistical reviewer of the study told us that he had found multiple problems with the study and only hoped that it was better done than it was reported. We asked if he had ever considered that the study might be fraudulent, and he told us that he hadn’t.

It is hard to keep track of publications; harder still to keep track of retractions.
