Innovation in healthcare: How to make it happen?

This is an interesting leader in the Financial Times, and I had the chance to read and quote the original economics paper linked in it.

Here’s a thought:

How to escape innovation’s Great Stagnation | Financial Times

The trouble is that while research inputs have been rising sharply, research productivity is dropping even faster. It now takes 18 times the number of researchers to achieve Moore’s law — that is, the doubling of computer chip power about every two years — than in the early 1970s.

TSMC in Taiwan has 5 nm processes in mainstream production and is working towards 2 nm by 2025. It took a cumulative 30+ years (and a strong focus on the application side) to push through these breakthroughs. I cannot reliably comment on “Moore’s law” and whether it will be broken. However, innovation in hardware specifications has been matched by only incremental improvement on the software side, and I have written numerous times about AI being specialised to perform specific tasks alone, not “general-purpose intelligence”. It will require considerable expertise to create legal and ethical frameworks to define AI applications, but with the epidemic of “liberalism” sweeping through the Western world, I have strong doubts that a politically charged environment is conducive to research. Besides, academia in its current state does not generate a “spirit of enquiry” but trains “conformists”. If you digress from specific “frames of reference”, your grades will suffer. If you have taken a loan to pay fees, I doubt you’d engineer a protest to push “free speech”. That is the difference from earlier days, when innovation and “research” happened in mission mode, versus the duplicative “research” happening currently. The Manhattan Project, for instance, engineered several breakthroughs through the creation of institutional frameworks and the articulation of clear goals as “outputs”.

Genuine innovation requires collaborative research, and the author has pushed the idea of “remote collaboration”. The mumbo jumbo published in Nature has some contradictory statements, but then, everything under the sun was published during the “pandemic”.

Here’s the partial context:

Virtual communication curbs creative idea generation | Nature

In a laboratory study and a field experiment across five countries (in Europe, the Middle East and South Asia), we show that videoconferencing inhibits the production of creative ideas. By contrast, when it comes to selecting which idea to pursue, we find no evidence that videoconferencing groups are less effective (and preliminary evidence that they may be more effective) than in-person groups. Departing from previous theories that focus on how oral and written technologies limit the synchronicity and extent of information exchanged, we find that our effects are driven by differences in the physical nature of videoconferencing and in-person interactions. Specifically, using eye-gaze and recall measures, as well as latent semantic analysis, we demonstrate that videoconferencing hampers idea generation because it focuses communicators on a screen, which prompts a narrower cognitive focus.

Our results suggest that virtual interaction comes with a cognitive cost for creative idea generation.

I don’t want to go into the full text (and I am not drawing any inferences from the abstract either). However, I remain unconvinced by the conclusions drawn. Any meeting (in person or virtual) without a specific agenda will dissipate “cognitive focus” anyway.

Nevertheless, here’s something more that the author of the FT write-up has to say:

This rhymes with my own work with Giorgio Presidente and Chinchih Chen of the Oxford Martin School. Our analysis of what 10mn research teams published between 1961 and 2020 shows that a growing percentage of discoveries are what we might call brief squeals rather than the blockbuster narratives we need to create new avenues for progress.

The standard response — more spending on R&D — won’t solve the productivity problem. The urgent task is to improve the process that converts research work into scientific outputs.

My argument is that research innovation (especially in application science) should happen at the end-user level. Take, for example, battery technology for transportation and storage. The success of lithium-ion batteries depends on the “battery-management-software” that monitors key battery parameters. Why should the original team focus on the software? They should work on newer battery chemistries rather than create “tweaks” in the software. That aspect should be left to institutions closer to the end consumer, which can take in continual feedback. Likewise, clinicians in healthcare should step into the basic sciences by understanding the limitations of the science and focusing on improving applications. For example, radiobiology should be studied both in Petri dishes and in patients through “surrogates” to create breakthroughs. Right now, there is little collaboration between the two vertical niches.
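To make the “battery-management-software” point concrete, here is a minimal, purely hypothetical sketch of the kind of parameter monitoring I mean. The thresholds, field names and function are my own illustrative assumptions, not any manufacturer’s implementation:

```python
from dataclasses import dataclass

# Hypothetical safety limits for a lithium-ion cell; real packs use
# chemistry-specific values supplied by the cell manufacturer.
VOLTAGE_RANGE_V = (3.0, 4.2)      # per-cell voltage window
MAX_TEMPERATURE_C = 60.0          # thermal cutoff
MIN_STATE_OF_CHARGE = 0.05        # avoid deep discharge

@dataclass
class CellReading:
    voltage_v: float
    temperature_c: float
    state_of_charge: float  # fraction, 0.0 to 1.0

def check_cell(reading: CellReading) -> list[str]:
    """Return a list of warnings for a single cell reading."""
    warnings = []
    low_v, high_v = VOLTAGE_RANGE_V
    if not (low_v <= reading.voltage_v <= high_v):
        warnings.append(f"voltage out of range: {reading.voltage_v:.2f} V")
    if reading.temperature_c > MAX_TEMPERATURE_C:
        warnings.append(f"over-temperature: {reading.temperature_c:.1f} C")
    if reading.state_of_charge < MIN_STATE_OF_CHARGE:
        warnings.append(f"deep discharge risk: SoC {reading.state_of_charge:.0%}")
    return warnings

# Example: one healthy cell and one overheating cell
for cell in [CellReading(3.7, 35.0, 0.80), CellReading(4.1, 72.0, 0.60)]:
    print(check_cell(cell) or "OK")
```

This monitoring-and-feedback layer sits close to the end use (the vehicle or the grid), which is precisely where I argue application-side innovation should live, while the original team stays focused on the chemistry.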

Can we connect “innovation-engines” together to form “collective-brains”?

To be clear, it is not that face-to-face no longer matters. Rather, by connecting real-world networks, such as Silicon Valley and Tel Aviv, or Oxford and Zhongguancun, remote collaboration increases the innovating potential of what Michael Muthukrishna and Joseph Henrich have called the “collective brain”.

I don’t know if it’s feasible or if there is an economic rationale to it. There are different legal systems, for example, different motivations to share research in a common pool, and the fascination with “breakthroughs” worthy of a Nobel.

There’s some economic background to it, too:

Ideas aren’t running out, but they are getting more expensive to find | VOX, CEPR Policy Portal

In a new study, we show that the costs of extracting ideas have increased sharply over time (Bloom et al. 2017). In other words, the innovation bang for the R&D buck (or ‘research productivity’) has declined. In an accounting sense, therefore, low productivity growth in the economy is a direct consequence of research effort failing to increase fast enough to offset declining research productivity. If we want to restore economic growth, we need to pay for it.
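The “accounting sense” in that excerpt can be written as a simple identity. This is my paraphrase of the Bloom et al. (2017) framing, not a quotation from the paper:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Roughly: growth in ideas = research productivity x research effort.
\[
  \underbrace{\frac{\dot{A}}{A}}_{\text{idea (TFP) growth}}
  =
  \underbrace{\frac{\dot{A}/A}{S}}_{\text{research productivity}}
  \times
  \underbrace{S}_{\text{research effort (researchers)}}
\]
% If research productivity falls faster than $S$ rises, the left-hand side
% -- overall growth -- declines: the ``accounting sense'' in the excerpt.
\end{document}
```

In other words, research effort has been rising, but the productivity term has fallen faster, so the product, overall growth, stagnates.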

The authors further explain:

It is harder to understand why ideas are getting harder to find. At one level, our work suggests that research is like any other input – there are likely to be diminishing returns. These days, pushing the frontier of knowledge out requires mastering an ever-larger body of knowledge, meaning that students have to stay longer in university, and researchers increasingly work in larger teams whose members are more specialised. This all pushes up costs

My submission is that current academia itself is to blame. There is significant gatekeeping, existing healthcare systems serve as profit centres for the “chosen few”, and barriers to healthcare workers are erected in the name of public safety. Countries with socialist systems are wedded to the idea of “life-time security” in terms of “pensions”, leaving little leeway for any innovative application of “research”. Those away from “academic centres” have little incentive to pursue “pure research” because it interferes with patient care. There are numerous examples and scenarios.

Hence, a breakthrough innovation can happen if we understand the barriers to “application science”. I have invested significant time in understanding these frameworks and have seen the situation on the ground from either side. Clinicians must learn to delegate tasks to subordinates and get used to a higher order of thinking. That means engaging in leadership, administration and policy, and shaping specific guidelines by conducting trial studies. This requires their supporting institutions to engage in specific structured workflows and provide the means to push these ideas forward. Basic scientists can then engage with these clinicians to define focus areas of research. For example, while the mapping of the human genome has concluded, its benefits will take at least a decade to flow to the public. This can be sped up by training physicians in its applications, understanding the specific contexts, and making cheaper reagents to conduct those tests. The research should focus on evaluating these discoveries for potential application in disease and on searching for new cures.

