It is a brilliant write-up! Briefly, there are two fundamental approaches to AI.
- Connectionism: Look at historical data and draw inferences from "patterns." Neural networks are the canonical example.
- Symbolism: First, look at this research proposal from 1958 to understand its context. It seeks to encode concepts as human-readable symbols and logical rules.
In isolation, neither approach moves the needle forward, and their adherents have been bitterly opposed to each other. Here is a quote from the author:
> In 1969, in response to early research on artificial neural networks, leading AI scholars Marvin Minsky and Seymour Papert published a landmark book called Perceptrons. The book set forth mathematical proofs that seemed to establish that neural networks were not capable of executing certain basic mathematical functions.
>
> Perceptrons' impact was sweeping: the AI research community took the analysis as authoritative evidence that connectionist methods were an unproductive path forward in AI. As a consequence, neural networks all but disappeared from the AI research agenda for over a decade.
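The most famous of those "basic mathematical functions" is XOR: no single-layer perceptron (a linear threshold over its inputs) can compute it, while adding one hidden layer solves it easily. A minimal illustrative sketch (my own, not from the book) demonstrates both points by brute force over a grid of weights and thresholds:

```python
import itertools

def perceptron(w1, w2, t):
    """Single-layer perceptron: fires iff w1*x1 + w2*x2 > t."""
    return lambda x1, x2: int(w1 * x1 + w2 * x2 > t)

def realizes(f, target):
    """True if f matches the target truth table on all four inputs."""
    return all(f(x1, x2) == target[(x1, x2)]
               for x1, x2 in itertools.product([0, 1], repeat=2))

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Search weights and thresholds in [-2, 2] at 0.25 resolution.
grid = [x / 4 for x in range(-8, 9)]
xor_hits = sum(realizes(perceptron(w1, w2, t), XOR)
               for w1, w2, t in itertools.product(grid, repeat=3))
and_hits = sum(realizes(perceptron(w1, w2, t), AND)
               for w1, w2, t in itertools.product(grid, repeat=3))
print(xor_hits, and_hits)  # 0 solutions for XOR; many for AND

def xor_two_layer(x1, x2):
    """Two-layer network: XOR = OR(x1, x2) AND NOT AND(x1, x2)."""
    h_or = int(x1 + x2 > 0.5)
    h_and = int(x1 + x2 > 1.5)
    return int(h_or - h_and > 0.5)

print([xor_two_layer(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

The search finds no single-layer solution for XOR (the four points are not linearly separable) but plenty for AND, which is exactly the limitation Minsky and Papert formalized.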
It is also a sad reflection on fundamental research, especially the "marketing" around it. A key takeaway is that one shouldn't let sweeping predictions cloud one's judgement. If it sounds too good to be true, it probably is!
Yet for all of its successes, deep learning has meaningful shortcomings. Connectionism is at heart a correlative methodology: it recognizes patterns in historical data and makes predictions accordingly, nothing more. Neural networks do not develop semantic models about their environment; they cannot reason or think abstractly; they do not have any meaningful understanding of their inputs and outputs. Because neural networks' inner workings are not semantically grounded, they are inscrutable to humans.
Importantly, these failings correspond directly to symbolic AI’s defining characteristics: symbolic systems are human-readable and logic-based.
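To make "human-readable and logic-based" concrete, here is a toy symbolic system (a hypothetical example of my own): facts and a rule stated in plain terms, with every derived conclusion traceable back to that rule.

```python
# Human-readable facts: tuples of (relation, subject, object).
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def infer_grandparents(facts):
    """Rule: parent(X, Y) and parent(Y, Z) => grandparent(X, Z)."""
    parents = [f for f in facts if f[0] == "parent"]
    derived = set()
    for (_, x, y1) in parents:
        for (_, y2, z) in parents:
            if y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(infer_grandparents(facts))  # {('grandparent', 'alice', 'carol')}
```

Unlike a neural network's weights, every step here is inspectable: the conclusion exists because a named rule fired on named facts.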
Having established that connectionism and symbolism are at loggerheads, it is notable that present-day researchers are working on hybrid theories.
Recognizing the promise of a hybrid approach, AI researchers around the world have begun to pursue research efforts that represent a reconciliation of connectionist and symbolic methods.
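One common hybrid pattern (a hedged sketch of my own; the names, scores, and rules below are all hypothetical) is to let a learned model propose and score candidates while hard symbolic constraints filter out inadmissible ones:

```python
def neural_scorer(candidate):
    """Stand-in for a trained network's confidence score (hypothetical values)."""
    scores = {"aspirin": 0.9, "drugX": 0.8, "placebo": 0.2}
    return scores.get(candidate, 0.0)

def symbolic_filter(candidate, patient):
    """Human-readable hard constraint: never recommend a known allergen."""
    rules = [
        lambda c, p: not (c == "aspirin" and "aspirin_allergy" in p["history"]),
    ]
    return all(rule(candidate, patient) for rule in rules)

def recommend(candidates, patient):
    """Symbolic rules prune the space; the learned scorer ranks what remains."""
    viable = [c for c in candidates if symbolic_filter(c, patient)]
    return max(viable, key=neural_scorer) if viable else None

patient = {"history": ["aspirin_allergy"]}
print(recommend(["aspirin", "drugX", "placebo"], patient))  # drugX
```

The appeal of the split is that the statistical component can be retrained freely while the safety-critical logic stays auditable.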
I am not sure how this will play out in healthcare. We have little understanding of complex in-vivo phenomena, and I wouldn't be surprised if scientists have to blend fiction with fact to explain the molecular soup.
It is tempting to offload the entire problem to an algorithm and take a "what-comes-of-it" approach to oncology. However, this approach is dangerously naive. Unless we quantify real-world scenarios and then work backwards, it won't do anyone any good.