I stumbled on this interesting journal article:
Using Web of Science data, one can find that China had overtaken the USA in the relative participation in the top-1% (PP-top1%) in 2019, after outcompeting the EU on this indicator in 2015. However, this finding contrasts with repeated reports of Western agencies that the quality of China’s output in science is lagging other advanced nations, even as it has caught up in numbers of articles. The difference between the results presented here and the previous results depends mainly upon field normalizations, which classify source journals by discipline. Average citation rates of these subsets are commonly used as a baseline so that one can compare among disciplines. However, the expected value of the top-1% of a sample of N papers is N / 100, ceteris paribus. Using the average citation rates as expected values, errors are introduced by (1) using the mean of highly skewed distributions and (2) a specious precision in the delineations of the subsets.
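The statistical point in that excerpt is easy to illustrate with a toy simulation (a sketch using made-up lognormal citation counts, not Web of Science data): in a heavy-tailed distribution the mean sits far above the median, so "above field average" is a much stricter bar than it sounds, while the expected number of papers in the top 1% of N papers is simply N/100 by construction.

```python
import random

random.seed(42)
N = 100_000
# Hypothetical heavy-tailed citation counts; a lognormal is a common
# stand-in for skewed citation distributions.
citations = sorted(random.lognormvariate(1.0, 1.2) for _ in range(N))

mean = sum(citations) / N
median = citations[N // 2]

# By definition, the expected number of papers in the top 1% is N/100,
# regardless of the field's average citation rate.
top1_threshold = citations[int(N * 0.99)]
top1_count = sum(c >= top1_threshold for c in citations)

print(f"mean={mean:.2f}, median={median:.2f}")  # mean well above median: skew
print(f"papers above the mean: {sum(c > mean for c in citations)} of {N}")
print(f"top-1% count: {top1_count} (expected {N // 100})")
```

With a skew like this, well under half of all papers clear the field mean, which is why mean-based field normalization and a straight top-1% count can tell different stories about the same data.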
A little digging:
These evaluation systems have also led to the proliferation of research malpractice, including plagiarism, nepotism, misrepresentation and falsification of records, bribery, conspiracy and collusion. While these problems are not unique to China, the central government’s requirement that institutions commit to clear-cut targets for positions in major global ranking systems such as QS or Times Higher Education within a stated time period, mainly by publishing articles in indexed journals, sets China apart. Because institutions and individual researchers stand to benefit greatly from elevating their reputations, no severe punishments have been imposed for academic corruption and malpractice, compared to the US and Japan, although reforms imposing stronger sanctions were announced in May.
The West has assumed an air of “moral superiority” about what counts as accepted “scientific research”. There is an overreliance on survey data and indices, which makes the discussion malleable around whichever metrics are chosen.
Here’s a major fraud that was unearthed, and this is just the tip of the iceberg:
“I’m more worried about what this news might do to the public’s perception of science than to our ability to make progress against this disease,” he says. The long delay in uncovering the alleged fakery isn’t ideal, and shows the importance of scientists speaking up and publishing results even when their experiments fail to prove another team’s claim.
This kind of publishing of “negative results” – papers that don’t give good news about a potentially promising idea – is not always encouraged, because scientists have more reason to leave those results on the shelf and spend time writing papers about things that do work.
Heads won’t roll. Journal editors won’t resign. NEJM won’t publish retractions or “choose to improve the reviewer process”. Meanwhile, most publishing happens at the “lower level” without an iota of review (or any check on the integrity of the conclusions drawn), or within “high-profile” citation rings that damage the credibility of science.
However, China is definitely improving its practical applications in quantum computing and (hold your breath) AI. The closest example of consumer-grade AI is “TikTok”, which has the best personalisation engine on the planet.
While there has been a race towards an “ARPA model” for everything under the sun, it isn’t easy to replicate. There’s even a book for that (external link, opens on Amazon). I haven’t read it so far, but it appears promising.
Scientists who have studied the DARPA model say it works if applied properly, and to the right, ‘ARPA-able’ problems. But replicating DARPA’s recipe isn’t easy. It requires the managers who build and run an agency’s grant programmes to have the freedom to assemble research teams and pursue risky ideas in promising fields that have typically been neglected by conventional industrial research and development programmes. Critics aren’t yet sure how ARPA-H, ARPA-C and ARIA will fare.
I am planning to follow this up with a different blog post.