Christopher Mims, writing for The Wall Street Journal:
Google actually led the way with on-phone processing: In 2019, it introduced a Pixel phone that could transcribe speech to text and perform other tasks without any connection to the cloud. One reason Google decided to build its own phones was that the company saw potential in creating custom hardware tailor-made to run AI.
These so-called edge devices can be pretty much anything with a microchip and some memory, but they tend to be the newest and most sophisticated of smartphones, automobiles, drones, home appliances, and industrial sensors and actuators. Edge AI has the potential to deliver on some of the long-delayed promises of AI, like more responsive smart assistants, better automotive safety systems, new kinds of robots, even autonomous military machines.
Edge computing has fascinated me for a while. On-premises and on-device computing paradigms are rapidly moving into the mainstream. Of course, there is a trade-off: keeping computation on the device sidesteps the privacy concerns of the cloud, but mobile processors can only run AI models at a limited scale. The most dramatic gains so far have been in “computational photography,” where smartphone picture quality now rivals that of professional cameras.
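To make the idea concrete, here is a minimal sketch of what on-device inference looks like with TensorFlow Lite, the runtime Google ships for running models on phones. Everything below executes locally with no network call; the model file name and input shape are hypothetical placeholders, not details from the article:

```python
# Minimal on-device inference sketch with TensorFlow Lite.
# No cloud round-trip: the model runs entirely on local hardware.
# "scene_classifier.tflite" is a hypothetical model file.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="scene_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for a camera frame, shaped to the model's expected
# input (e.g. a 1x224x224x3 float tensor for an image model).
frame = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # inference happens on the device itself

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class index:", int(np.argmax(scores)))
```

The limited scale mentioned above shows up here in practice: models have to be quantized and pruned to fit a phone's memory and power budget, which is exactly the constraint custom silicon is meant to relax.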
Here’s more from the author:
In our everyday lives, things like voice transcription that work whether or not we have a connection, or how good it is, could mean shifts in how we prefer to interact with our mobile devices. Getting always-available voice transcription to work on Google’s Pixel phone “required a lot of breakthroughs to run on the phone as well as it runs on a remote server,” says Mr. Rakowski.
Google is planning to launch its own custom chip, called the Tensor Processing Unit (TPU). I’ll be keeping a close watch on the developments.
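If Google does put TPU-style silicon in phones, developer access would plausibly resemble today’s hardware delegates. As a hedged sketch (not Google’s phone API, which hasn’t been announced), this is how TensorFlow Lite hands a model off to a Coral Edge TPU accelerator today; the delegate library name is the standard one on Linux, and the model paths are hypothetical:

```python
# Sketch: offloading a TFLite model to an Edge TPU accelerator
# via a hardware delegate, with a CPU fallback if none is present.
from tflite_runtime.interpreter import Interpreter, load_delegate

try:
    # "libedgetpu.so.1" is the Edge TPU delegate on Linux;
    # the library name differs on macOS and Windows.
    delegate = load_delegate("libedgetpu.so.1")
    interpreter = Interpreter(
        model_path="model_edgetpu.tflite",  # hypothetical TPU-compiled model
        experimental_delegates=[delegate],
    )
except (OSError, ValueError):
    # No accelerator found: run the CPU build of the same model.
    interpreter = Interpreter(model_path="model.tflite")

interpreter.allocate_tensors()
```

The design point is that the application code stays the same whether the model lands on a CPU, GPU, or dedicated AI chip; the delegate decides where the work runs.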