I am linking two interesting write-ups: one on public policy published in the Financial Times, and the other on Medium by Prof Subhash Kak. First, the Financial Times:
There is just one problem: the “marshmallow test” is one of a number of studies that has fallen prey to science’s “replication crisis”: studies in fields as wide-ranging as psychology, sociology and economics that fail to replicate when other researchers conduct them. A 2018 study found that the 1990 version had written a cheque its findings couldn’t cash, once you controlled for various factors. Successful replication is at the heart of successful science: to use a crude example, it’s not good enough to produce a vaccine that protects me from severe symptoms of coronavirus. The jab has to provide similar levels of protection across the population.
Public policy is a different ball game altogether: policies are politically charged (healthcare delivery, for example), which makes optimal assessment and criteria difficult to establish. Sociological policies like free homes and toilets have an even harder ROI to measure. There is no real option of "A-B testing" to see what works and what doesn't. Furthermore, it can be difficult to roll back a policy once it has been implemented, so it receives "tweaks" that dilute the objectives it was initially implemented for. Budgetary support is not always forthcoming, and limited funding must be shared among many competing priorities.
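For contrast, it is worth seeing what a textbook "A-B test" actually involves in a setting where it is feasible. Below is a minimal sketch of a two-proportion z-test; the two "policy pilots" and their success counts are entirely invented numbers for illustration, not data from any real programme.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is pilot B's success rate different from A's?"""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical pilot data: 540/1000 households satisfied under pilot A,
# 600/1000 under pilot B (invented numbers, purely for illustration).
z = two_proportion_z(540, 1000, 600, 1000)
print(f"z = {z:.2f}, significant at 5%: {abs(z) > 1.96}")
```

The mechanics are trivial; the difficulty in public policy is everything around them: randomising who gets the policy, agreeing on what counts as "success", and being willing to abandon the losing arm.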
The author writes:
But this can create problems of its own: when a policy programme doesn’t work, it is often politically painful for a government to abandon it, so all that a “pilot scheme” really does is expose one part of the country to a bad policy slightly earlier than the rest.
There might be a clear lesson in learning from "other democracies", but for years many US citizens praised the Cuban healthcare system, especially for its price differentials. Political will requires flattening out the "special doles and handouts" that favour specific groups over others, and implementing uniform laws and policies. That would reduce these A-B testing requirements to a manageable level.
Now for something from Prof Subhash Kak.
Marcia Angell, the author of The Truth About the Drug Companies: How They Deceive Us and What to Do About It, put it this way: "It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as editor of The New England Journal of Medicine."
More recently, Richard Horton, editor of The Lancet, wrote that “The case against [medical] science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
I am not as bitter as the critics quoted here, but I have personally witnessed "publication and citation rings" and the cosy clubs that form, excluding others from taking part. This is done consciously to enhance their "prestige" and "access", while the quality of the work remains questionable. I can't name names, but my applications have been rejected or vetoed on flimsy grounds despite the lofty exhortations. It doesn't matter when you take a long-term view.
Most clinical research is not useful at all.
Useful clinical research procures a clinically relevant information gain: it adds to what we already know. This means that, first, we need to be aware of what we already know so that new information can be placed in context. Second, studies should be designed to provide sufficiently large amounts of evidence to ensure patients, clinicians, and decision makers can be confident about the magnitude and specifics of benefits and harms, and these studies should be judged based on clinical impact and their ability to change practice. Ideally, studies that are launched should be clinically useful regardless of their eventual results. If the findings of a study are expected to be clinically useful only if a particular result is obtained, there may be a pressure to either obtain that result or interpret the data as if the desired result has been obtained.
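The point about "sufficiently large amounts of evidence" can be made concrete with a small simulation. This is a sketch under invented parameters, not a model of any real trial: with a small true effect and a small sample per arm, the few studies that do reach significance will, on average, overestimate the effect considerably (the so-called winner's curse).

```python
import math
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # small true mean difference, in standard-deviation units
N = 30              # participants per arm: a typically underpowered study

def run_study():
    """Simulate one two-arm study; return the observed effect and significance."""
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(control) / N
                   + statistics.variance(treated) / N)
    return diff, abs(diff / se) > 1.96  # crude z-criterion for p < 0.05

significant = [diff for diff, sig in (run_study() for _ in range(2000)) if sig]
print(f"power: {len(significant) / 2000:.0%}")
print(f"mean 'significant' effect: {statistics.mean(significant):.2f}"
      f" (true effect {TRUE_EFFECT})")
```

With these invented parameters the power comes out at roughly 10-15%, and the average "significant" estimate is around three times the true effect, which is exactly why a same-size replication so often fails.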
Imagine the biases dressed up!
I’d leave it at that: inherent biases cannot be generalised, so you need to exercise your own judgement each time. Does the published paper offer anything novel? Is there a new insight we might have missed? Are we becoming wiser through peer review and editorial conjunction, or merely aligning with the “prevailing thought-process”? Are we trying to disrupt the stodgy medical profession?
Public policy, when based on flawed research, will inevitably be flawed. I don’t think that A-B testing will provide any meaningful outcomes, either. Therefore, we need to analyse the import of the studies (beyond the PRISMA guidelines) and understand their context.