I have written about “status bias” in research before. What is it, you may ask?
In that situation, yes-or-no calls can end up being made on very fine margins, with the difference sometimes boiling down to biases. “Panel members might say, ‘Oh, she’s had grant money before so she must be good’, or, ‘He’s at Oxford so he must be brilliant’,” Wilsdon says, revealing it can take just one sceptic to scupper a proposal’s chances.
Status biases are nothing but “market signals”. This explains why it is incredibly difficult to “break through” in academia, even if you have specific research ideas to push through or a new direction to offer.
Making research funding a lottery could help tackle ‘status bias’ | Financial Times
One might argue that “status bias”, also called the Matthew effect, makes for a reasonable short-cut in decision-making, given that prizes are one benchmark of quality. But findings like these chime with persistent concerns that established names and institutions are unfairly crowding out newer research talent when it comes to publishing papers and winning grants.
What is lacking in this discourse is the outsized importance of “committees”. These exist because they provide “collective responsibility”, or rather, diffuse the blame (sink or swim together). Hence, in politically charged environments where funds are disbursed, committees act as safety valves engaged in a collective search for “market signals”. Peer review is only a veneer.
What is the alternative? Definitely not a random number generator. First, identify whether there is ANY need for the research in the first place. (This point is highly generalised and not specific to the argument.) The “idea of research” is to identify something novel that will be useful to society.

Yet all the Nobel Prizes in economics haven’t been able to prevent the mess in western economies, which remain heavily dependent on debt. Healthcare metrics haven’t “improved” in the US despite its outsized spending as a proportion of GDP. Nothing fruitful has come out of Canada despite its “leadership” position in “stereotactic” XRT, barring scenarios that may simply have been glossed over. (I don’t intend to target anyone; I am questioning the validity of the research.) How has stereotactic XRT (or even modulation) really helped to improve “survival”, as opposed to “quality”, which is a qualitative experience (coloured by biases)?
Therefore, if “peer review” acts merely as a filter, with luck supplying the random number generator, we might as well play the lottery explicitly. Either way, there remains a distinct possibility of good ideas being sidelined.
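To make the “filter plus lottery” idea concrete, here is a minimal sketch in Python. All names, the screening flags, and the number of funded slots are hypothetical; it assumes peer review only screens proposals for methodological soundness, and the funding decision among the eligible pool is a blinded random draw.

```python
import random

# Hypothetical proposal pool: (id, passes_methodological_screen).
# In such a scheme, the screen would be a light-touch peer review
# that checks soundness only, without ranking applicants by prestige.
proposals = [
    ("P-001", True),
    ("P-002", True),
    ("P-003", False),  # screened out: methodology unsound
    ("P-004", True),
    ("P-005", True),
]

def fund_by_lottery(proposals, budget_slots, seed=None):
    """Filter-then-lottery: peer review screens, chance decides.

    `budget_slots` is the number of grants the budget can cover.
    """
    eligible = [pid for pid, sound in proposals if sound]
    rng = random.Random(seed)  # seed retained so the draw is auditable
    if len(eligible) <= budget_slots:
        return eligible  # enough money for every sound proposal
    return rng.sample(eligible, budget_slots)

# Example: fund 2 of the 4 methodologically sound proposals.
print(fund_by_lottery(proposals, budget_slots=2, seed=42))
```

Publishing the seed would at least let applicants verify that the draw, unlike a committee’s deliberation, carries no status signal.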