Consider the following “news” first:
- OpenAI’s sentence generation algorithm
- Solving Rubik’s cube
- Neural net that solves problems 100 million times faster
Now consider the critiques, which never got as much prominence as the “fake news” (links in the same order as above):
- Correction on the GPT-2 system
- Original link to the Rubik’s cube “solution” (1992)
- A detailed review of link 3 in another publication.
I am not knocking the genuine advances made in AI; I am opposing the influx of marketing that precedes any substantial advancement. It extends to the various universities seeking out public funds. For every “success”, there are numerous failures, and even then the success is only relative.
The funny thing with statistics is that if you wring its neck, it will yield a correlation; the tighter you squeeze, the stronger it gets!
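To make this concrete, here is a minimal sketch (my own illustration with made-up numbers, not drawn from any of the studies above) of how testing enough variables on a small sample will always “find” a strong correlation in pure noise:

```python
# A toy demonstration: generate pure noise, test enough candidate
# variables, and a "strong" correlation appears anyway.
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_variables = 30, 500  # small sample, many hypotheses

outcome = rng.normal(size=n_patients)                  # random "outcome"
features = rng.normal(size=(n_variables, n_patients))  # random "predictors"

# Pearson correlation of each noise variable with the noise outcome
correlations = [np.corrcoef(f, outcome)[0, 1] for f in features]
best = max(correlations, key=abs)
print(f"Strongest correlation found in pure noise: r = {best:.2f}")
# With 500 tries on 30 samples, |r| > 0.5 is routine. Wring the neck
# harder (more variables, fewer samples) and it gets "stronger" still.
```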
A historical perspective would be appropriate here. MIT’s AI lab had, as far back as 1966, proposed artificial general intelligence as a system of “artificial consciousness”. We haven’t solved this problem yet, and I am not holding my breath for 2020 to prove the year of reckoning for AI. Likewise, Google promised “deep learning” in 2015 but has instead become a symbol of corporate surveillance. Subsequently, they also predicted the “death of radiology” by pushing the idea of “visual intelligence” and its promise of identifying “early tumours”. Missing from these claims was any discussion of the excessive radiation dose that asymptomatic individuals would receive (even for CT scan screening) over and above the applicable limits. Who will bear those costs anyway?
HBR went even further by not even doing the basics of journalism: due diligence. It published an account from Andrew Ng, who claimed that AI could automate mental tasks. Much to his chagrin, it hasn’t happened (there’s an interesting discussion here.) However, at the time of writing, Harvard Business Review hasn’t retracted the publication.
Another notable example is IBM Watson. Someone had asked me to find out whether its promises were indeed true as claimed. It took me one weekend to show that IBM was overclaiming its benefits, and another meeting with the lead researcher to prove that it is a hoax. (A strong word to use, I know, but their marketing invites it.) There is nothing “artificial” about it: it requires entering a lot of input parameters and merely fetches articles from a database. It is like a glorified “NCCN workflow”, which is available as a free downloadable PDF, repackaged as a costly acquisition. The last time I saw their pitiful attempts to scan textual write-ups, I gave up on them. Luckily, only a few departments “fell” for the ruse, largely because of the steep costs. They are now justifying the purchase by featuring it in tumour boards. A leading hospital (I won’t name them) even offers “consultation” with a specialist AND Dr Watson!
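To illustrate what I mean, here is a deliberately crude caricature (my own assumption of the workflow; I obviously have no access to IBM’s code, and every name below is hypothetical) of a system that takes many hand-entered parameters and merely filters a curated article database:

```python
# A crude caricature of "parameters in, matching articles out" retrieval.
# Not IBM's code; an illustration of the author's point only.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    cancer_site: str
    stage: str
    histology: str

DATABASE = [
    Article("Adjuvant therapy in stage II colon cancer", "colon", "II", "adenocarcinoma"),
    Article("Chemoradiation in stage III NSCLC", "lung", "III", "squamous"),
    # ... thousands more, curated and indexed by humans
]

def recommend(cancer_site: str, stage: str, histology: str) -> list[Article]:
    """Nothing 'artificial' here: a plain filter over a curated database."""
    return [a for a in DATABASE
            if a.cancer_site == cancer_site
            and a.stage == stage
            and a.histology == histology]

print(recommend("lung", "III", "squamous"))
```

Useful retrieval, perhaps, but calling it “artificial intelligence” misprices what it actually does.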
Chatbots, likewise, are yet another shining example of a promise that never came around. Luckily, I was able to dissuade an institution from even entertaining the thought. I had earlier designed and implemented a bot on the Telegram chat application that required a lot of user input, which gave me a fair idea of the inner workings. Before that, I had worked with another startup in the same space and realised its limitations. Hooking up a conversational AI interface is fraught with several problems: notably, the training data that must be generated, the computational power required, and the bot’s lack of context in the conversation and its replies. Even as a standalone application, a chatbot is a costly proposition on an ongoing basis (hosting and training the datasets).
It is a sure-shot way of failing expensively!
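For the curious, here is roughly what such a bot skeleton looks like, written against Telegram’s public Bot HTTP API (a minimal sketch under my own assumptions; the token, keywords and replies are hypothetical, and this is not the bot I actually built):

```python
# A minimal long-polling Telegram bot skeleton using the public Bot HTTP API.
# The canned-reply dict stands in for the "training" a real conversational
# backend needs, and it keeps no context between turns, which is exactly
# the limitation described above.
import requests

TOKEN = "123456:ABC-REPLACE-ME"   # hypothetical token from @BotFather
API = f"https://api.telegram.org/bot{TOKEN}"

CANNED_REPLIES = {                # the "dataset": every intent hand-written
    "hours": "The clinic is open 9am-5pm.",
    "appointment": "Please call the front desk to book.",
}

def reply_to(text: str) -> str:
    for keyword, answer in CANNED_REPLIES.items():
        if keyword in text.lower():
            return answer
    return "Sorry, I did not understand that."   # no context, no memory

def main() -> None:
    offset = 0
    while True:
        # long-poll Telegram for new messages
        updates = requests.get(f"{API}/getUpdates",
                               params={"offset": offset, "timeout": 30}).json()
        for update in updates.get("result", []):
            offset = update["update_id"] + 1
            message = update.get("message", {})
            if "text" in message:
                requests.post(f"{API}/sendMessage",
                              json={"chat_id": message["chat"]["id"],
                                    "text": reply_to(message["text"])})

if __name__ == "__main__":
    main()
```

Even this toy must be hosted around the clock; layer NLU training, dataset curation and compute on top, and the recurring costs pile up quickly.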

We are, in no way, replacing radiologists with a beaming computer that works tirelessly. Currently, radiology departments remain grossly understaffed (and so does radiation oncology!)
Jokes apart, we need ethical AI and the stamping out of the influx of natural stupidity (read: marketers). We should also question the veracity of claims, even if they come from “reputed magazines”. AI researchers and scientists need to be more transparent in their communication, and oncologists rushing in should develop a healthy scepticism towards the promises. Stripping away the rhetoric, researchers need to put their models in the open domain (which would also help others improve their robustness on, say, more massive datasets).
The march of technology is inevitable, but there have been voices of reason pointing towards the AI Winter that would inevitably result from the disdain of those allocating public money. Collectively, we need to safeguard an exciting development in AI that does have the potential to deliver change (but not yet). We don’t need a “revolution”; we need to embrace slow, incremental changes in technology as they happen, which would help us pivot towards its deliverable promises.