I. AI: The revolution hasn’t happened yet

Photo by Thiago Matos on Pexels.com

This write-up surfaced as a find on Hacker News (where it had gained a lot of traction) and is nearly three years old. However, it crystallises the crux of the problem very clearly: What is AI? Is it for real? Why should we question it?

Before I get into the nitty-gritty: I strongly believe in the process of creation. Why don’t you see venture capitalists pushing money into “manufacturing”? Because manufacturing is hard. You need land, labour regulations, administration and a host of other complexities before you can churn out a finished product. The software world, by contrast, is easy. VC firms bet on several “projects”; one that gains outsized prominence eclipses the multiple failures. That, however, requires a complete ecosystem: media, networking and the works. Those are more abstract principles. Therefore, you end up reading about any number of startups being pushed out as the “future of innovation”. I digress.

I am also fascinated by the World War II era (especially around 1939-46), when the foundational technologies were laid: semiconductors, pure research, and pushing the boundaries from the unknown to the known. The light had truly shone then. Our gilded age has made only incremental improvements. The first cobalt machine to treat tumours has given way to modulated radiation therapy, now dressed up in fancy terminology like volumetric arcs and inspiratory breath-holding; each layer of complexity only adds more uncertainty.

If we keep these two principles in mind, it becomes easier to follow the events (or to look at them through this lens). We could refrain from making pompous claims about what technology can achieve. We can also learn to be humble about the limits of our craft and reinforce the idea that hapless patients require human connection (more than telemedicine or virtual presence). And we can stop and reflect on how hype cycles reinvent bubbles of “algorithms” and how the narrative is pushed out by sleazy snake-oil marketers.

Here is a fascinating account:

Consider the following story, which involves humans, computers, data and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to 1 in 20.” She further let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis. But amniocentesis was risky — the risk of killing the fetus during the procedure was roughly 1 in 300. Being a statistician, I determined to find out where these numbers were coming from. To cut a long story short, I discovered that a statistical analysis had been done a decade previously in the UK, where these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. But I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I went back to tell the geneticist that I believed that the white spots were likely false positives — that they were literally “white noise.” She said “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago; it’s when the new machine arrived.”
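
The arithmetic behind that 1-in-20 is worth making explicit. Here is a minimal Bayes’-rule sketch; the prevalence, sensitivity and specificity values below are purely my own illustrative assumptions (only the 1-in-20 and 1-in-300 figures appear in the quote), but they show how a small drop in specificity on a sharper machine turns most positives into “white noise”:

```python
# Minimal Bayes'-rule sketch of the reasoning in the quoted story.
# The 1-in-20 marker risk and 1-in-300 amniocentesis risk come from the
# quote; the prevalence, sensitivity and specificity figures below are
# illustrative assumptions, not values from the original UK study.

def posterior_risk(prevalence, sensitivity, specificity):
    """P(condition | marker seen) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 1 / 700    # assumed background rate for the pregnancy
sensitivity = 0.30      # assumed: marker seen in 30% of affected fetuses

# Old, lower-resolution machine: few spurious "white spots".
old_machine = posterior_risk(prevalence, sensitivity, specificity=0.99)

# New, higher-resolution machine: imaging noise shows up as extra spots,
# so specificity drops and most positives are literally "white noise".
new_machine = posterior_risk(prevalence, sensitivity, specificity=0.90)

amnio_loss_risk = 1 / 300   # procedural risk quoted in the story

print(f"Posterior risk, old machine : ~1 in {1 / old_machine:.0f}")
print(f"Posterior risk, new machine : ~1 in {1 / new_machine:.0f}")
print(f"Amniocentesis loss risk     :  1 in 300")
```

With these assumed numbers, the post-marker risk on the new machine falls to roughly 1 in 230 rather than 1 in 20, the same order of magnitude as the procedural risk of the amniocentesis itself, which changes the decision entirely.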

The author writes:

The problem had to do not just with data analysis per se, but with what database researchers call “provenance” — broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation? While a trained human might be able to work all of this out on a case-by-case basis, the issue was that of designing a planetary-scale medical system that could do this without the need for such detailed human oversight.
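
To make “provenance” concrete, here is a hypothetical sketch (the dataclasses, field names and relevance check are my own illustration, not anything from the essay) of carrying provenance alongside an inference so that a downstream system can at least ask whether the calibration context still applies:

```python
# Hypothetical sketch of "provenance" attached to an inference: the names
# and fields below are my own illustration, not anything from the essay.
from dataclasses import dataclass

@dataclass
class Provenance:
    source_study: str     # where the calibration data arose
    device_model: str     # hardware the study's data was acquired on

@dataclass
class Inference:
    finding: str
    quoted_risk: float
    provenance: Provenance

    def applies_to(self, current_device: str) -> bool:
        # Crude relevance check: trust the quoted risk only if the current
        # device matches the one the original study was calibrated on.
        return current_device == self.provenance.device_model

marker = Inference(
    finding="echogenic foci ('white spots')",
    quoted_risk=1 / 20,
    provenance=Provenance("UK calcium-buildup study", "low-resolution scanner"),
)

if not marker.applies_to(current_device="high-resolution scanner"):
    print("Provenance mismatch: the quoted risk may not transfer to this machine.")
```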

I have personally held that “open-source” trials are the way forward. A long time back, I invested effort and time in a “modular system” of electronic medical records that would integrate various signals into one uniform whole to provide a “singular” narrative. It did not materialise, but the motivation was to push through real-time data, especially for trials.

Here’s something important in the narrative, and it is the reason I chose to keep this one for today:

I’m also a computer scientist, and it occurred to me that the principles needed to build planetary-scale inference-and-decision-making systems of this kind, blending computer science with statistics, and taking into account human utilities, were nowhere to be found in my education. And it occurred to me that the development of such principles — which will be needed not only in the medical domain but also in domains such as commerce, transportation and education — were at least as important as those of building AI systems that can dazzle us with their game-playing or sensorimotor skills.

If you see a Boston Dynamics robot dancing to a specific tune, it doesn’t mean the end of humanity; it is only a “showcase” of what they are working on, and an incredible waste of talent and resources. Real-world problems require transferring this work towards open-source domains, to push through possible care (and nursing assistants), rather than dancing in the labs. Likewise the gimmicky AI coming from the land of the coronavirus.

Here’s an interesting blurb:

Most of what is being called “AI” today, particularly in the public sphere, is what has been called “Machine Learning” (ML) for the past several decades. ML is an algorithmic field that blends ideas from statistics, computer science and many other disciplines (see below) to design algorithms that process data, make predictions and help make decisions. In terms of impact on the real world, ML is the real thing, and not just recently. Indeed, that ML would grow into massive industrial relevance was already clear in the early 1990s, and by the turn of the century forward-looking companies such as Amazon were already using ML throughout their business, solving mission-critical back-end problems in fraud detection and supply-chain prediction, and building innovative consumer-facing services such as recommendation systems. As datasets and computing resources grew rapidly over the ensuing two decades, it became clear that ML would soon power not only Amazon but essentially any company in which decisions could be tied to large-scale data. New business models would emerge. The phrase “Data Science” began to be used to refer to this phenomenon, reflecting the need of ML algorithms experts to partner with database and distributed-systems experts to build scalable, robust ML systems, and reflecting the larger social and environmental scope of the resulting systems.

I would want you to raise the lens of skepticism here. One advisor for this write-up is Bezos, so it is only natural that Amazon be portrayed in a flattering light. Amazon funds its operations through profitable cloud services and, increasingly, through a parallel system of advertising on its shopping website. Those are lucrative because they display advertisements to users “willing to buy” or “having money to spare”. Google has a generic advertising business that helps funnel end users through various other algorithms.

It can also be argued that engineering requires a confluence of various principles: ethical, moral and “free of biases”. We have also seen an immense amount of hand-wringing over how the “ethical researchers” have been treated, though, frankly, the research is “abstractive” and offers no real-world solutions. In short, they don’t “manufacture” anything except opinions. Hence the superfluous debates.

Last but not least (and this also forms the basis for the next write-up) is this important takeaway:

We now come to a critical issue: Is working on classical human-imitative AI the best or only way to focus on these larger challenges? Some of the most heralded recent success stories of ML have in fact been in areas associated with human-imitative AI — areas such as computer vision, speech recognition, game-playing and robotics. So perhaps we should simply await further progress in domains such as these. There are two points to make here. First, although one would not know it from reading the newspapers, success in human-imitative AI has in fact been limited — we are very far from realizing human-imitative AI aspirations. Unfortunately the thrill (and fear) of making even limited progress on human-imitative AI gives rise to levels of over-exuberance and media attention that is not present in other areas of engineering.

Don’t miss this!

Read more.
