AI for healthcare: How to Build Artificial Intelligence We Can Trust

These are interesting “computer science” problems, and the authors have advanced their philosophical arguments as well, in effect arguing that we should make computers “aware”.

I think it is a monumental waste of time. An awareness of causality wouldn’t offer any practical advantage unless the goal is a better “self-driving vehicle”.

In particular, we need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets — often using an approach known as deep learning — and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space and causality.

Today’s A.I. systems know surprisingly little about any of these concepts. Take the idea of time. We recently searched on Google for “Did George Washington own a computer?” — a query whose answer requires relating two basic facts (when Washington lived, when the computer was invented) in a single temporal framework.

None of Google’s first 10 search results gave the correct answer. The results didn’t even really address the question. The highest-ranked link was to a news story in The Guardian about a computerized portrait of Martha Washington as she might have looked as a young woman.

Opinion | How to Build Artificial Intelligence We Can Trust – The New York Times
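To make the quoted point concrete, here is a minimal sketch, not taken from the op-ed or from any real search system, of the kind of temporal reasoning being described: put both facts on one timeline and check whether the relevant intervals overlap. The Interval type, the overlaps function, and the rough start year chosen for electronic computers are illustrative assumptions of mine.

```python
# Sketch of the temporal reasoning the op-ed describes: relate two facts
# (when Washington lived, when the computer existed) in one timeline.
# Names and the rough 1940 start year are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Interval:
    start: int  # year
    end: int    # year


def overlaps(a: Interval, b: Interval) -> bool:
    """True if the two year intervals share at least one year."""
    return a.start <= b.end and b.start <= a.end


washington_lifetime = Interval(1732, 1799)      # George Washington's life
electronic_computer_era = Interval(1940, 2100)  # roughly, early electronic computers onward

# "Did George Washington own a computer?" reduces to: did his lifetime
# overlap the era in which computers existed at all?
print(overlaps(washington_lifetime, electronic_computer_era))  # False, so the answer is "no"
```

The sketch only shows that, once the two facts sit in the same temporal frame, the answer falls out of a trivial interval comparison; whether that capability is worth “rebooting” machine learning for is the point the rest of this post disputes.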

This is a specific example of how “thought leaders” wish to “re-imagine” machine learning by “rebooting” it. We are better off without these charlatans. Such opinions are usually a waste of time and effort, pushed out to shape the dominant narrative and to churn out PhDs for something that would hold no value.