Scientific stagnation - part II

Medical breakthroughs are becoming rarer, unless you want to believe the next hype cycle in cancer immunotherapy (and some eager testimonials). It works for some, but for the vast majority durable responses are few and far between, and recurrences (and returns to standard therapies) are the norm. My senior peers (and colleagues) keep inventing innovative ideas around survival without pushing for “best supportive care”. Patient conversations become more tolerable if we weave in realistic estimates of how best to approach disease progression.

The fact is that we have picked the “low-hanging” fruit. Radiobiology research is “curiosity-driven” fundamental research, and grants for it have become fewer because it doesn’t offer significant marketing chutzpah. Can you imagine a headline screaming about the successor to the LQ model? If it fits “mathematically” for a given scenario, it works. Marketing departments require returns on investment, and that has become the bane of the industrialization of medicine.

Here’s an interesting thought:

We need to challenge the conventional peer-reviewed research paper, by which I mean a publication that was reviewed by 2 to 5 peers before getting published. It is a relatively recent innovation that may not always be for the best. People like Einstein did not go through this process, at least not in their early years. Research used to be more like “blogging”: you would write up your ideas and share them, and people could read them and criticize them. This communication can happen through different means; some researchers broadcast their research meetings online.

Here’s something more:

The other, related problem is the incestuous relationship between researchers and assessment. Is the work on theory X important? “Let us ask people who work on theory X”. No. You have to have customers, users, people who have incentives to provide honest assessments. A customer is someone who uses your research in an objective way. If you design a mathematical theory or a machine-learning algorithm and an investment banker relies on it, they are your customer (whether they are paying you or not). If it fails, they will stop using it.
It might seem that peer-reviewed research papers establish this kind of customer-vendor relationship, one where you get a frank assessment. Unfortunately, it fails as you scale it up. The customers of a research paper are its independent readers, that is true, but those readers have their own motivations.

This is exactly what I have been saying about the practicality of research. Unless it has clinical implications, what is the value of the assessment?

Once upon a time, I came across an interesting argument (and a conference paper) for whole-body MRI instead of FDG-PET. Although the data was compelling, it was never tested (or validated) in a formal clinical setting. Even today, I see inane arguments against FDG-PET for carcinoma cervix, with the “researchers” deluding themselves into believing that the “global south” somehow cannot afford the recommendations. These ideas (aired at glitzy conferences) are far from practical; they are designed for the razzle-dazzle, to induce a “wow” factor.

Ultimately, there is an increasing clamor to look beyond academic publishing and to encourage a broader base of better ideas.

In conclusion, I do find it credible that science might be facing a sort of systemic stagnation brought about by a set of poorly aligned incentives. The peer-reviewed paper accepted at a good venue as the ultimate metric seems to be at the core of the problem. Further, the whole web of assessment in modern science often seems broken. It seems that, on an individual basis, researchers ought to adopt the following principles:
Seek objective feedback regarding the quality of your own work using “customers”: people who would tell you frankly if your work was not good. Do not mistake citations or “peer review” for such an assessment.
When assessing another researcher's work, try your best to behave as a customer who has some distance from the research. Do not count inputs and outputs as a quality metric. Nobody would describe Stephen King as a great writer merely because he has published many books. If you tell me that Mr Smith is a great researcher, then you should be able to tell me about his research and why it is important.