Measuring the value of “scientific research”

The golden question is: What are you researching? What ideas (or leading hypotheses) do you wish to generate, and what measurable impact should they have? How does the money (the funding) align with your research direction? What is the quantifiable end goal?

This and more is explored in this fascinating blog post:

Do Academic Citations Measure the Impact of New Ideas?

If science is working well, then we would hope the only difference between these two is time. Optimistically, good ideas get recognized, people learn about them, incorporate the insights into their own research trajectories, and then cite them. In that case, potential value is basically the same thing as the actual impact if you let enough time pass.

But we have a lot of reasons not to be optimistic. Maybe important new ideas face barriers to getting published and disseminated, because of conservatism in science, or because of bias and discrimination. Or, if those obstacles can be surmounted, maybe there are barriers to changing research directions that prevent scientists from following up on the most interesting new ideas and allowing them to reach their potential. Or maybe low potential ideas garner all the attention because the discoverers are influential in the field.

My grouse is that there are not enough ideas to go around, especially in the combinatorial sciences. For example, discovering a new cancer pathway, through whatever means, isn’t fundamentally going to alter outcomes, because it doesn’t address the “biology” or the “overall survival”. Finding yet another mAb or another blocker won’t change the directional outcomes either. Combination with radiation doesn’t inspire much hope, because you need to explore the fundamental radiobiology and altered fractionation schemes to achieve the abscopal effects (if any).

The real problem with citations:

But it’s also possible that citations don’t even reflect actual impact. This would be the case, for example, if citations don’t really reflect acknowledgements of intellectual influence. Maybe people don’t read the stuff they cite; maybe they feel pressured to add citations to curry favor with the right gatekeepers; maybe they just add meaningless citations to make their ideas seem important; maybe they try to pass off other people’s ideas as their own, without citation. If these practices are widespread, then citations may not reflect much of anything at all.

These are, of course, the unstated truths. Random citations, and even publication itself, require careful assessment that “peer review” doesn’t really provide, and neither offers a directional view of scientific progress. Numerous rehashed versions of the same articles get published, each proclaiming novelty (“zero redundancies”) so as not to choke off the funding streams. Anything of “value” (basically, low-hanging fruit) has been plucked multiple times and rehashed across various “journals” surviving on the “APC”, the dreaded article processing charges.

Another interesting graph:

Notice the overall data line says “with citer effects.” That’s intended to control for the possibility that there might be systematic differences among respondents. Maybe the kind of researchers who lazily cite top-cited work are also the kind of people who lazily answer surveys and just say “sure, major influence.” But Teplitsky and coauthors’ survey is cleverly designed so they can separate out any potential differences among the kind of people who cite highly cited work versus those who do not: they can look at the probability the same person rates a paper as more influential than another if it also has more citations. Overall, when you additionally try to control for other features of papers, so that you are comparing papers the survey respondent knows equally well (or poorly), the probability they will rate a paper as influential goes up by 34 percentage points for every 10-fold increase in citations.
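To make the “citer effects” idea concrete, here is a minimal sketch of a within-citer comparison on synthetic data. It is not the authors’ actual analysis; the variable names (citer_id, citations, influential) and the data-generating assumptions are mine. The coefficient on log10(citations) in a linear probability model with citer fixed effects can be read as the change in the probability of an “influential” rating per 10-fold increase in citations, comparing papers rated by the same person.

```python
# Sketch only: synthetic data, hypothetical variable names, not the original study's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_citers, papers_per_citer = 200, 5

rows = []
for citer in range(n_citers):
    leniency = rng.normal(0, 0.1)                 # citer-specific rating tendency
    for _ in range(papers_per_citer):
        citations = int(10 ** rng.uniform(0, 3))  # 1 to ~1000 citations
        # Assumed effect: +0.34 probability of an "influential" rating per 10x citations.
        p = np.clip(0.1 + 0.34 * np.log10(citations) + leniency, 0, 1)
        rows.append({"citer_id": citer,
                     "citations": citations,
                     "influential": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# Linear probability model with citer fixed effects: the slope on log10(citations)
# is the within-citer change in Pr(rated influential) per 10-fold citation increase.
fit = smf.ols("influential ~ np.log10(citations) + C(citer_id)", data=df).fit()
print(fit.params["np.log10(citations)"])          # should recover roughly 0.34
```

The fixed effect C(citer_id) absorbs each respondent’s baseline generosity, which is the point of the “with citer effects” line: the comparison is across papers rated by the same person, not across different kinds of raters.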

This is an interesting takeaway from the linked quote. Are the most cited papers really “influential”? It also ignores the secular trend of paper propagation, especially retractions, which are less likely to figure in these surveys. I may recall a paper with interesting findings (or a major “practice changing” randomised trial) without accounting for specific outliers, or without a complete assessment of its statistical flaws. One that immediately comes to mind is Bonner’s trial on cetuximab, which overstated the effect of the “biological therapy”, while the sub-studies showed that the benefit accrued from altered fractionation instead. Yet it was hailed as practice changing. Likewise for the famed START trials in early breast cancer, which had an abnormally high (I think around 10%) breast tumour recurrence rate; no one bats an eyelid, even though recurrences at that rate are hard to detect unless there is ironclad mammographic screening and reliable patients for follow-up. Recurrences are messier to manage, while MRM can virtually eliminate the risk. Yet they persist.

Citations (and therefore the conclusions drawn in the “guidelines”) require a higher proportion of “negative studies” instead. If everything looks rosy on the outside, readers will be unaware of the pitfalls. Practice-changing studies are presented as all rainbows and unicorns, a veritable fool’s paradise.

I’d like to add a few more pointers:

First, whereas we might be worried that academics cite work they are not engaging with to curry favor with the right academic gatekeepers, this doesn’t seem to be as relevant for non-academic citations. Second, as discussed in the post “Science is Good at Making Useful Knowledge,” patents that cite highly cited academic work also tend to be more valuable patents themselves. That’s not what we would expect if the citations are not actually bringing any value, in terms of new knowledge. Moreover, we have some scattered evidence for the commonsense view that science really does add value to technology.

That said, it may still be that highly cited work is often used as a form of rhetorical armor, to make arguments that are not really influenced by the citation seem like they have a stronger evidentiary backing than they do (see, e.g., Von Neumann 1936).

Food for thought, really.
