Van Rensselaer Potter coined the word “bioethics” in the early 1970s at the University of Wisconsin – it connotes the “integration of biology and moral values for human survival”. It coincided with the socio-cultural and political movements of that era in the US (which are outside the scope of this discussion). Indeed, there was an emerging awareness and recognition of individual autonomy, motivated in part by the media frenzy surrounding ethical violations in human clinical trials. It culminated in the Belmont Report (1979), published by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which outlined “respect for persons” as a core value.
The rise of Artificial Intelligence (AI) and Machine Learning, especially in healthcare, mandates a review of the specific provisions, placing “primum non nocere” (First, Do No Harm) back in that context. The technological crossroads with healthcare is complex and nuanced, and challenges a multitude of assumptions around treatment decisions and implied consent. It requires considerable interdisciplinary work across legal torts, medicine, ethics and technology, specifying reciprocity in concepts of justice and fairness.
While researching this write-up, I came across a fascinating account of how the “big-tobacco” debate was shaped through the funding of “research organisations” acting through proxies. The associated public relations exercise sowed “seeds of doubt” to muddle public discourse and opinion around its products. It was, literally, “smoke and mirrors”. Science is probabilistic; human assumptions and belief systems, in contrast, are tied to absolute causality. For example, you wear a seat belt to “increase the chances of survival” as though it were an absolute statement, but being in a fatal car crash is probabilistic. AI and ML require ever-increasing computational resources to determine meaningful outcomes through specific parameters, and many companies have raised their stake in their success. We are therefore witnessing a similar PR exercise in the mainstream media – the narrative spins either towards a dystopian view or heralds the “next revolution”. Mainstream “bloggers” call it the “web 3.0” moment, with many hyperbolic, optimistic projections about its achievements. These over-optimistic projections help them garner public attention and improve their chances of scarce funding. The debate over its utilitarian goals (and importance) in healthcare remains murky.
I call it the “big-tobacco redux”.
Decision-making is a complex affair shaped by externalities. These become increasingly complicated during the course of cancer treatment, especially after recurrences, disease progression and “end-of-life” care, which can present many moral quandaries for patients, caregivers and their physicians. Traditional medicine has seen rational arguments – around the withdrawal of life support, for example – play out in legal and sociological debate. However, the debate around AI and ML in healthcare has not even begun in earnest (unless you count “think-tanks” that don’t have a single physician on board). Similarly, the ethics of targeted therapies (despite their limited efficacy and high societal costs) has not been explored with academic rigour, despite their dubious cost-benefit analysis in the contemporary literature. This begs an important but unanswered question – is overtreatment doing more “harm” than good?
These problems will be further reinforced when imaging (for example) shows disease progression, and a decision is made to consider treatment options based on either a biopsy or limited genetic sequencing. Technically, a biopsy represents only a small part of a heterogeneous tumour, and it would be difficult to extrapolate its findings to the tumour’s global profile. In this scenario, AI-based clinical decision support systems are likely to flounder and increase the costs of intervention – even where there are clear cases for “palliative” or “best supportive care” under the present “consensus guidelines”.
The pace of technological innovation in data collection (minus recall bias), wearables, the Internet of Things (IoT) and the use of clinical and genomic data through health platforms is rapid. Data privacy and localisation have become a cornerstone for most countries. However, this data plumbing is the technical equivalent of a black hole for end users, as it requires niche specialisation to understand how different data streams interact to yield cohesive, actionable data. What relative weights have been assigned, say, to genomic data when melded with multiparametric imaging? Rigorous analysis and standardisation of genomic analysis and image acquisition are required for results to be reproducible. It requires a leap of faith to “trust” computer algorithms with the end goal of “personalised medicine”.
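To make the weighting question concrete, here is a minimal sketch (all names, scores and weights are hypothetical, not drawn from any real system) of the kind of opaque “late fusion” a platform might perform when melding a genomic risk score with an imaging-derived one – the end user never sees where the weights came from:

```python
# Hypothetical illustration only: how relative weights might combine
# a genomic risk score with an imaging-derived one in a fusion model.
def fuse_risk(genomic_score, imaging_score, w_genomic=0.6, w_imaging=0.4):
    """Weighted late fusion of two normalised risk scores in [0, 1].

    The weights are the hidden assumption: who chose 0.6 vs 0.4,
    and on what evidence, is exactly the question posed above.
    """
    if not (0.0 <= genomic_score <= 1.0 and 0.0 <= imaging_score <= 1.0):
        raise ValueError("scores must be normalised to [0, 1]")
    total = w_genomic + w_imaging
    return (w_genomic * genomic_score + w_imaging * imaging_score) / total

# A patient with high genomic risk but ambiguous imaging:
combined = fuse_risk(0.9, 0.3)
print(round(combined, 2))  # prints 0.66
```

Change the weights and the “personalised” recommendation changes with them – which is why the provenance of those weights belongs in any ethical audit.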
Medico-legal requirements for the “appropriation of responsibility” demand continuous supernumerary audits and a higher cost of treatment to meet “ethical requirements”. Most contemporary literature raises hackles around pervasive data collection and some overblown “surveillance claims” – even though data pools remain aggregated with certain entities (either public or private), whose access controls require further clarification in law.
The future course of medicine in the treatment of disease will be heavily influenced by prevailing socio-cultural attitudes, religious beliefs, proper science communication, well-built research protocols, a robust legal system and an inclusive debate involving all sections of society, eventually intersecting in a grand narrative.
Bioethics is an exciting domain to explore in a technological context, because while there can be no absolutely deterministic outcomes, it ensures that societal harms are minimised.