A brilliant write-up, and the most important takeaway here:
Most AI systems used today—whether for language translation, playing chess, driving cars, face recognition or medical diagnosis—deploy a technique called machine learning. So-called “convolutional neural networks,” a silicon-chip version of the highly interconnected web of neurons in our brains, are trained to spot patterns in data. During training, the strengths of the interconnections between the nodes in the neural network are adjusted until the system can reliably make the right classifications. It might learn, for example, to spot cats in a digital image, or to generate passable translations from Chinese to English.
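To make the training idea in that excerpt concrete, here is a minimal sketch in plain Python: a single artificial “neuron” whose connection weights are nudged whenever it misclassifies, until it reliably gets the answer right. This is a toy perceptron learning the logical AND function, a deliberate simplification—real convolutional networks adjust millions of weights the same basic way.

```python
# Toy illustration of the training loop: adjust connection strengths
# until the system reliably classifies. (A far cry from a real CNN.)

# Dataset: learn the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # strengths of the two input connections
bias = 0.0
lr = 0.1               # learning rate: how big each adjustment is

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Training: whenever a prediction is wrong, nudge each weight in the
# direction that would have reduced the error.
for epoch in range(20):
    for x, target in data:
        error = target - predict(x)
        if error:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

After a handful of passes over the data, the weights settle into values that classify all four cases correctly—the same “adjust until reliable” principle the article describes, at the smallest possible scale.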
The idea of making AI more “human” is really a philosophical construct. It has more to do with the “misdirected” efforts to inject “ethics” into the debate. Most reasonable people won’t do unreasonable things. If you force ethics into the complex interplay of development and testing, you are only complicating issues where there are none. Ethics is not even a solution to the problems as they stand now. We are not yet at the cusp of the AI “revolution,” despite the hoopla. We have only begun to understand how the process works: harnessing mathematics to spot patterns rapidly.
Admittedly, that is a rather cynical point of view.
Here’s another excerpt from the same write-up that sums up my perception perfectly:
The problem is that deep learning has no way of checking its deductions against “common sense,” and so can make ridiculous errors. It is, say Marcus and Davis, “a kind of idiot savant, with miraculous perceptual abilities, but very little overall comprehension.”
There is currently no understanding of how to make AI more “predictable” or endow it with a sense of intuition. We need to recognise that. Therefore, we need to keep our filters on regarding the hype around AI and the fears that it is “taking away the jobs.” A fascinating read indeed.