AI Research: Importance of lateral integration to break “tunnel vision”


Healthcare innovation has stalled. If you are excited about the next “big thing” at the conference, it is because you want to break the monotony of being locked up at home for the better part of 2020 due to an ongoing pandemic. I am sure everyone will speak about how “awesome” it is to be back to what was the “norm”, and about “Zoom fatigue”. If you have signed up for the conference emails, you’d get notifications about “late-breaking abstracts” and wonder why those “deadlines” apply to you alone.

Nevertheless, it is critical to understand that innovation is an ongoing process and conferences are a showcase. They are about public perception: being seen as a “thought leader” and “leaving a legacy of research”, with a portrait hung in the hallway. These are socio-cultural issues, though, and I can’t claim that I don’t suffer from “perceptional envy”. I do, but it doesn’t drive my motivation. It only encourages me to understand the integrative aspects of research better.

For the better part of 2019–2020, I have explored several aspects of deploying technology in healthcare: enterprise use-case scenarios, making better data-driven decisions, using chatbots with natural language processing, developing mobile applications, dabbling in wearables, and understanding conceptual ideas around ecosystems, policy-driven ideas around the constant tussle of regulatory frameworks, commercial exploitation of data, and much more. Healthcare privacy, often to the detriment of end-users, remains an issue of topical concern because algorithms operate as “black boxes”. Therefore, I have explored several issues around open-source domains and technology. I have also written (and understood) at length about breakthroughs in academia, gatekeeping and risk-averse committees. I have spoken out against the blatant use of PR to the detriment of consumer behaviour, and instead advocated a community-building approach, especially to break open the silos of “discrimination”. I remain a strong advocate of public funding of science, whose benefits should eventually accrue to the public. These are the broad domains which this blog aims to explore in a significant manner.


As an oncologist, I appreciate that specialisation is critical to addressing systems biology. The “medical literature” (even if duplicated) is expanding at a rate that is tiresome for an individual to keep up with. As such, while the fear of missing out is genuine, much research is required to ascertain how findings translate to the “real world” rather than an “optimal trial setting”. It requires a measured approach to application (for example, radiation dosages): we have only evolved from erstwhile empirical dosages to something which doesn’t cause overt harm. Therefore, the above-mentioned issues related to AI in healthcare deal more with the “delivery of healthcare” than with what’s at the “core”. Human intervention can never be ruled out, and AI can be utilised for repetitive tasks. I am sure my western counterparts would welcome something that transcribes automatically. (Wait, GPT-3 is already here!)

So how do we measure and chart “factual progress” beyond the scope of “conferences” (which, in my opinion, resemble a beauty pageant whose crown goes to whoever will work for the underprivileged brethren)? For AI, it will mean taking a stab at what I refer to as the core of medical practice: the doctor-patient relationship. AI builds up factual knowledge for the patient and makes them aware of the treatment journey, while medical practitioners address the “human component”. At the risk of being a techno-optimist, I think this “futurism” is already playing out in some respects.


Therefore, it is critical to address the “integrative approach”. Rather than arguing over which algorithm offers the best “return on investment”, the point is to use a combination, so that the failures of the first are addressed by the second. Likewise, it is crucial to have someone with ideas across the above-mentioned domains to push the field, because it requires several collaborators working on the “singular idea of truth”. What better example than Radiation Oncology, which works within fine tolerances and statistical uncertainties? We are still measuring five-year outcomes, whereas we need to evolve towards putting quality of life at centre stage. AI can take a shot at that. That’s a real possibility.
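To make that “combination” idea concrete, here is a minimal sketch (purely illustrative; the dataset, models and confidence threshold are my own assumptions, not drawn from any real project) of one model backstopping another by taking over the cases the first is unsure about:

```python
# Illustrative sketch only: a toy "integrative" pairing in which a second model
# handles the cases where the first model is uncertain. The synthetic data,
# model choices and the 0.65 threshold are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First model: a fast, interpretable baseline.
baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Second model: picks up what the baseline cannot decide confidently.
backstop = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

proba = baseline.predict_proba(X_test)
confident = proba.max(axis=1) >= 0.65                      # baseline is confident here
preds = proba.argmax(axis=1)
preds[~confident] = backstop.predict(X_test[~confident])   # defer the rest

print(f"Deferred {np.sum(~confident)} of {len(y_test)} cases to the second model; "
      f"combined accuracy: {np.mean(preds == y_test):.3f}")
```

The specific models matter less than the pattern: one algorithm’s blind spots are handed to another, rather than betting everything on a single “best” model.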

I have always been fascinated by technology. I remember Nokia used to have dedicated experts to understand how end-users were using their devices. Those were the days of real innovation, and I recall being excited about their new models. BlackBerry went down almost a similar path by developing solutions that addressed the real needs of users (though BlackBerry 10 finally met its Waterloo; pun intended). The underlying science was “anthropology”.

Here’s what Gillian Tett writes in the Financial Times:

This is not just a tale about tech, however. Far from it. The real issue at stake is tunnel vision. Today most professions encourage their adherents to adopt intellectual tools that are at best neatly bound or at worst one-dimensional. Economic models, by definition, are defined by their inputs, and everything else is deemed an “externality” (which was how climate change issues used to be perceived). Corporate accountants are trained to relegate things not directly linked to profits and losses (such as gender ratios) into the footnotes of company accounts. Political pollsters or consumer surveys often operate with pre-determined questions….

For at the heart of this endeavour is a basic truth: even in a digitised world, humans are not robots, but gloriously contradictory, complex, multi-layered beings, who come with a dazzling variety of cultures. We cannot afford to ignore this diversity, even after a year in which we have been cloistered in our own homes and social tribes; least of all given the fact that global connections leave us all inadvertently exposed to each other. So in a world shaped by one AI, artificial intelligence, we need a second AI, too — anthropology intelligence.


It’s fascinating to look at the preceding discussion through this lens. It begs the familiar question: how culturally diverse are the conferences? Data from developing countries is likely to be completely at odds with what’s being presented elsewhere. If the conference is being billed as a “platform for global challenges” and you have familiar faces from the same verandah, you do the maths. The fixation on the global while being firmly rooted in the local offers little solace. The best ideas are often formed in distant lands, when you escape the tunnel vision and develop a lateral viewpoint.

Here’s another gem from Gillian:

Indeed, one way to interpret the rise of environment, social and corporate governance and “stakeholderism” is that many corporate leaders recognise the need for a wider lens. The pandemic has also shown us that in a globalised world it is dangerous to ignore or deride other cultures when we are all so tightly entwined. We need more empathy for strangers to survive and thrive.

I daresay there are enough parallels lurking. About three weeks ago, I faced a rejection from a “peer-reviewed” journal, for a submission in which I had meticulously dissected the reasons why AI (or ML) isn’t making significant headway in healthcare. The anonymous reviewer was decent, but the criteria used to summarily reject the submission had no bearing on my line of argument. I can only surmise that you need an academic “flywheel” to join the system, one where they would encourage submissions and understand the process of getting them accepted.


I intend to take a stab anyway, and I remain optimistic that superlative “practical ideas” will find room for acceptance. The finest minds on the planet are working on “driverless” cars without much to show for it; the problems of equity, practical application and breaking down silos are legitimate. A universal system that works across different environments is more crucial than drones attempting deliveries in tony areas of human habitation. Trust me, mapping solutions with precise location and behavioural nudges through notifications encouraging doom-scrolling are significantly less important than understanding the historical perspectives of socio-cultural determinants of health (using the same precise location) and using the same nudges for behavioural change towards preventive healthcare.

It is then that lateral integration (and AI) will have actually succeeded.
