I stumbled on this link through yet another link. It reads like a plug, but for once let's not get distracted by the hyperbole: the problem highlighted here is a severe one.
I don't know why the author would do this work for free.
Inaccurate data in scientific papers can result from honest error or intentional falsification. This study attempted to determine the percentage of published papers that contain inappropriate image duplication, a specific type of inaccurate data. The images from a total of 20,621 papers published in 40 scientific journals from 1995 to 2014 were visually screened. Overall, 3.8% of published papers contained problematic figures, with at least half exhibiting features suggestive of deliberate manipulation. The prevalence of papers with problematic images has risen markedly during the past decade.
Additional papers written by authors of papers with problematic images had an increased likelihood of containing problematic images as well. As this analysis focused only on one type of data, it is likely that the actual prevalence of inaccurate data in the published literature is higher. The marked variation in the frequency of problematic images among journals suggests that journal practices, such as prepublication image screening, influence the quality of the scientific literature.
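A quick sanity check on the numbers (my own arithmetic, not from the paper): 3.8% of 20,621 screened papers works out to roughly 780 problematic papers, which lines up with the "initial 800 papers" mentioned below.

```python
# Back-of-the-envelope check: how many papers does the reported
# 3.8% rate correspond to, out of the 20,621 screened?
screened = 20_621
rate = 0.038

flagged = screened * rate
print(round(flagged))  # ~784, consistent with the ~800 figure cited later
```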
One of the initial 800 papers she reported was by Min-Jean Yin, who led Pfizer's California cancer lab at the time of the paper's publication. It contained duplicated images of western blots, a common test used to detect specific protein molecules. The images produced by the test are the data themselves, so editing them essentially amounts to chopping and changing the results to fit whatever hypothesis the scientist is trying to prove.
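The study's screening was done visually, but the basic idea behind catching exact or lightly edited duplicates can be sketched with a toy perceptual hash: reduce each image panel to a coarse bit pattern and compare bit patterns. This is a minimal illustration with hand-made pixel grids, not a description of the tooling the author actually used.

```python
# Toy average-hash comparison for spotting duplicated image panels.
# Each "panel" is a 2D list of grayscale values (0-255). Real forensic
# screening is far more involved; this only catches near-identical copies.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is brighter than the mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if v > mean else '0' for v in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

panel_a = [[10, 200], [30, 220]]
panel_b = [[12, 198], [28, 225]]   # same blot, slightly re-scanned
panel_c = [[200, 10], [220, 30]]   # a genuinely different blot

ha, hb, hc = map(average_hash, (panel_a, panel_b, panel_c))
print(hamming(ha, hb))  # 0  -> likely a duplicate
print(hamming(ha, hc))  # 4  -> distinct pattern
```

A zero (or very small) Hamming distance between panels from supposedly different experiments is exactly the kind of signal that prompts a closer manual look.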
How do you really prove that results are “accurate”? (That brings us to issues of reproducibility in science.)
Likewise, algorithms that predict outcomes such as xerostomia in head and neck cancer differ from one another in how they arrive at the “global minimum”. Can we rely on single-institution studies and, as a corollary, on patient-reported outcomes? These are questions that journals, editors, and the broader scientific community need to ask.
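The point about "global minima" is worth making concrete: on a non-convex loss, the same optimization algorithm can settle into different minima depending only on where it starts, so two institutions fitting the same model on similar data can report different "optimal" solutions. A minimal sketch (toy function, not any actual outcome model):

```python
# Gradient descent on f(x) = (x^2 - 1)^2, which has two minima
# at x = -1 and x = +1. Identical algorithm, different starting
# points, different final answers.

def grad(x):
    return 4 * x * (x**2 - 1)   # derivative of (x^2 - 1)^2

def descend(x, lr=0.05, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(round(descend(0.5), 3))   # converges to  1.0
print(round(descend(-0.5), 3))  # converges to -1.0
```

Neither run is "wrong"; each has found a genuine minimum. That is precisely why claims built on a single institution's fitted model deserve scrutiny.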