This post was “inspired” by an editorial from Scientific American (and I am riding its coattails) because I needed someone to call out the broken process. The essay makes some generalisations; yet it remains relevant because we, as scientists (and clinicians), owe it to the patients who look up to us. We aren’t working miracles.
The marketing push that accompanies “late-breaking abstracts” promises the moon with the purpose of grabbing attention. Yet I’d be hard-pressed to remember the “keynote” speech at the last conference I attended (I don’t participate in many because they are a waste of time).
I am not here to berate a process that some find useful for networking. I am only bemoaning the multiplicity of scientific research that seems to be leading nowhere. Reproducibility is the key here, and every scientist is aware of it. I strongly feel that the problem lies in the funding mechanism: a “contract” with an external agency often compels researchers to “show results”. Science is painstakingly built on slow (very, very slow) incremental processes. Sometimes a breakthrough only happens when someone else has made progress, often in an unrelated domain.
The politics of science has thus sullied its image and assumed larger-than-life proportions. This is what I call “bad science” (science mated to dirty politics). People who abuse the process to “publish” (often results that cannot be reproduced elsewhere) are to blame. Several copycat journals have sprung up in the process. While researching a topic recently, I was appalled to see an article from a known associate that repurposed and repackaged the same ideas in an obscure publication. It may tick up the number of publications, but how does it improve the treatment process? Is there anything novel in it? What are we trying to accomplish?
The author makes a coherent case using the example of bad loans (and the 2008 financial crisis that nearly wiped out the banking industry). The bad actors, however, were well insulated from the consequences of their actions. (Emphasis mine throughout.)
So, let’s imagine what might happen if the rules of professional science evolved such that scientists were incentivized to publish as many papers as they could and if those who published many papers of poor scientific rigor were rewarded over those who published fewer papers of higher rigor? What would happen if scientists weren’t rewarded for the long-term reproducibility and rigor of their findings, but rather became a factory that produced and published highly exciting and innovative new discoveries, and then other scientists and companies spent resources on the follow up studies and took all the risk?
“Publishing factories” is the appropriate term. A long time back, as a resident, I objected to a nouveau terminology coined by an “eminent radiation oncologist” by writing to the editor that the proposed conceptual idea rested on shaky ground. Both the editor and the “researcher” are no longer on the scene, but the consequences of their actions still reverberate. Who assumes the risk for such senility? Patients? Why?
This is not an issue of scientific fraud or misconduct where scientists invent data or purposefully lie; the data are real and were really observed. However, the fiercely competitive environment leads to haste to publish and a larger number of less rigorous papers results. Careful and self-critical scientists who spend more time and resources to carry out more rigorous and careful studies may be promoted less often, receive fewer research resources and get less recognition for their work.
The funding agencies often dictate this haste to publish. Yes, we need money to shape ideas and run research, but there has to be a robust dialogue on how that money is arranged. Should it be publicly funded research? Should it come from private enterprise? Should science encourage moonshots without any idea of how the result might benefit the end users?
Of course, the scientific publication is subjected to a high degree of quality control through the peer-review process, which despite the political and societal factors that are ineradicable parts of human interaction, is one of the “crown jewels” of scientific objectivity. However, this is changing. The very laudable goal of “open access journals” is to make sure that the public has free access to the scientific data that its tax dollars are used to generate.
However, open access journals charge the authors of articles a substantial fee to publish, to make up for the dollars lost from not requiring subscriptions. So, instead of making more money the more copies of the journal they sell, open access journals make more money as a function of how many articles they accept. Authors are willing to pay more to get their articles published in more prestigious journals. So, the more exciting the findings a journal publishes, the more references, the higher the impact the journal, the more submissions they get, the more money they make.
This is another bone of contention; it merits a deeper dive in another blog post.
Unless and until leadership is taken at a structural and societal level to alter the incentive structure present, the current environment will continue to encourage and promote wasting of resources, squandering of research efforts and delaying of progress; such waste and delay is something that those suffering diseases for which we have inadequate therapy, and those suffering conditions for which we have inadequate technological remedies, can ill afford and should not be forced to endure.
These are exactly my thoughts. This is one of the reasons why we don’t have a cure for cancer. The “breakthroughs” are marketed as the “next best thing”, which only raises false hopes among those affected. Several universities are working on nearly identical pathways in the hope of discovering a blocker. I still hold that radiation is the most effective way to kill cancer cells. We need to focus, instead, on reducing side effects by exploiting radiobiology and novel fractionation schemes. We also need to understand the genetic mechanisms and the tumour-microenvironment alterations that occur in the radiation field. Further, as I have discussed before, patient-reported outcomes are a better marker of treatment-related side effects than a pure-play physician assessment, unless there is a comprehensive checklist.
We won’t see the “cure” for cancer anytime soon. However, we need to temper science within a comprehensive translational domain that is easily reproduced, shorn of the associated hype. The system is broken, but as the author says, this is a starting point for us to pause, reflect and ponder.