Doing better research

As we hurtle towards the post-pandemic crisis (while wilfully ignoring the torrent of publications on COVID practices from departments), it has become apparent that we have too much of the same thing. The painful realisation about hypofractionated treatments came to the fore (adopted in the absence of a prior radiobiological basis), pushed through on whatever passes for "quality of evidence".

I stumbled on a masterful editorial in the BMJ that made this apparent back in 1994: we are doing "too much research", with poor-quality trials. I will quote selectively (in the same vein as earlier posts on this blog). Emphasis is mine, throughout.

When I tell friends outside medicine that many papers published in medical journals are misleading because of methodological weaknesses they are rightly shocked. Huge sums of money are spent annually on research that is seriously flawed through the use of inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation.

In oncology this becomes even more problematic, because the end results (including quality-of-life outcomes) are difficult to predict, much less quantify. It is therefore imperative that we focus on the methodologies. Systematic biases creep in despite careful consideration at the trial design stage.

I am not getting into the specific reasons for these errors, or the "publish or perish" culture.

Here’s something interesting:

Why are errors so common? Put simply, much poor research arises because researchers feel compelled for career reasons to carry out research that they are ill equipped to perform, and nobody stops them. Regardless of whether a doctor intends to pursue a career in research, he or she is usually expected to carry out some research with the aim of publishing several papers.

The length of a list of publications is a dubious indicator of ability to do good research; its relevance to the ability to be a good doctor is even more obscure.

In the current scenario, I see Twitter battles fought over survival curves: do they even matter?

A little while back, I was dealing with an interesting case of metastatic prostate cancer. Although he had oligometastases (nodal and osseous), he had responded well to hormonal therapy and chemotherapy. Logic says that, with his well-controlled PSA, he did not require any further treatment. However, the STAMPEDE trial showed otherwise. Whom do I believe? A shitty trial? Or my common sense?

Yet it has become a “standard of care”.

This is just one instance. There is a lot we still don't know. Better research entails definitive endpoints and not repeating the mistakes of others. Medical research, as it stands, is often shoddy and of poor quality.

As the system encourages poor research it is the system that should be changed. We need less research, better research, and research done for the right reasons. Abandoning using the number of publications as a measure of ability would be a start.

I concur here.