AI Moonshots: Primed for Failure

Jeffrey Funk and Gary Smith write:

These failed goals cost money. After being promised $1.3 billion in funding from the European Union, Markram’s Human Brain Project crashed in 2015. In 2016, the market research firm PwC predicted that GDP would be 14 percent or $15.7 trillion higher in 2030 because of A.I. products and services. They weren’t alone. McKinsey, Accenture, and Forrester also forecast similar figures by 2030, with Forrester in 2016 predicting $1.2 trillion in 2020. Four years later, in 2020, Forrester reported that the A.I. market was only $17 billion. It now projects the market to reach $37 billion by 2025.


I completely agree with them. I have been saying here that the commercial AI applications pushed out by Google, IBM Watson, and a rash of others are mostly hype. They may be one way of testing the waters, but the authors have also missed the larger issue: the fault lines within academia.

AI is hotly contested and awash with funds. If it were a publicly funded model in which researchers received money based on the impact of their work (beyond the metrics) and on the creation of infrastructure and ecosystems, it would be worthwhile. Instead, most universities engage in vanity projects, marketing, and chasing free “cloud compute time”. Why not? There is nothing to lose, because no one questions them.

The authors write further:

The health care moonshot has also disappointed. Swayed by IBM’s Watson boasts, McKinsey predicted a 30–50 percent productivity improvement for nurses, a 5–9 percent reduction in health care costs, and health care savings in developed countries equal to up to 2 percent of GDP. The Wall Street Journal published a cautionary article in 2017, and soon others were questioning the hype. A 2019 article in IEEE Spectrum concluded that Watson had “overpromised and underdelivered.” Soon afterward, IBM pulled Watson from drug discovery, and media enthusiasm waned as bad news about A.I. health care accumulated. For example, a 2020 Mayo Clinic and Harvard survey of clinical staff who were using A.I.-based clinical decision support to improve glycemic control in patients with diabetes gave the program a median score of 11 on a scale of 0 to 100, with only 14 percent saying that they would recommend the system to other clinics.

I am not a huge fan of surveys. They have no meaning beyond providing a bellwether for what people perceive at the moment. Sometimes it appears the entire staff has enough time to fill out the survey forms that litter their inboxes, yet the results are published as a moment of truth. Nevertheless, despite the overwhelmingly depressing tone of the linked article, the authors have a substantial point – hype raises unrealistic expectations. Most of the VCs I have come across speak in private about “replacing doctors”. I wouldn’t be surprised if they believe it.

Interesting times.