Consumer based “supercomputers”

Richard Waters writes:

Google’s specially designed chips, called TPUs, process the signals inside the most advanced deep learning systems. The goal is not precision of each operation but the overall picture they can assemble, as billions of electronic “neurons” in these electronic brains search for the patterns in mountains of data.
But the term obscures what are some of the most difficult problems in computer science. Understanding people’s true intent, divining underlying meaning in the world — these are profound questions, with huge implications for any company that can truly solve them. Google’s current business model may involve monetising people’s attention through advertising, but the long-term implications of a computer intelligence that can truly mediate the world are immense.

(emphasis mine)

This write-up on Google appeared in the immediate aftermath of Google I/O. A company that once promised to do no evil has become the sole custodian of data on millions of users, and it is radically transforming the web in previously unimaginable ways to concentrate ever more data. It is therefore imperative to understand Google, because its business operations span far beyond search.

While the developments may sound exciting, I personally feel we are hurtling towards a kind of "totalitarianism": a singular condition in which the virtual replaces the real and the lines between them blur. I therefore find it oddly unnerving that computing resources are being developed to understand human motivations and thoughts. These will have a profound impact on their users.

For example, Google's conversational interface, "LaMDA", is a radically new take on conversational AI, distinct from GPT-3. Google now runs many AI functions on the device itself; Threema, for example, debuted an application update with on-device image recognition (I was told Huawei had done it earlier). On-device processing minimises lag. As users are conditioned to trust conversational AI, and "cost savings" are pushed over "human interactions", it will have a significant impact on human resources as companies seek to cut costs. What would it take to have Alexa answer medical queries?

Japan's Fugaku, the world's fastest supercomputer
All this power is shrunk into your palm, gradually.

Google's introduction to LaMDA:

Taken from their blogpost

LaMDA’s conversational skills have been years in the making. Like many recent language models, including BERT and GPT-3, it’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next. 
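The mechanism the quote describes, each word "paying attention" to how it relates to the others before the model predicts what comes next, can be sketched in a few lines. This is my own toy illustration of scaled dot-product attention (the core Transformer operation), not Google's code; the array sizes are arbitrary stand-ins for word embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # how strongly each word relates to each other word
    weights = softmax(scores, axis=-1)  # each row is a distribution over the sequence
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 8                       # a 4-"word" sentence, 8-dim embeddings
x = rng.normal(size=(seq_len, d))       # stand-in word embeddings
out, w = attention(x, x, x)             # self-attention: Q = K = V = x

print(out.shape)        # (4, 8): one context-mixed vector per word
print(w.sum(axis=-1))   # each row of attention weights sums to ~1.0
```

In a real model this output feeds further layers that ultimately score every vocabulary word as the likely next token; the attention step is what lets that prediction depend on how the words relate to one another.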

These "qualities" and a different model do indeed mark a different step in conversational AI. I had long tried to develop a Telegram-based bot with NLP (natural language processing), and failed: the integration was terrible, the APIs were bad, and it was incredibly expensive. That is why Google's foray into healthcare is all the more concerning. Interestingly, they have kept it under the radar to avoid excessive public scrutiny.
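For readers curious what such a bot integration involves, here is a minimal sketch of the kind of thing I was attempting. The keyword-matching "NLP" and the helper names are my own hypothetical illustration; only the Telegram Bot API endpoint shape (`sendMessage` with `chat_id` and `text`) is real. No network call is made here.

```python
import json
import urllib.request

API = "https://api.telegram.org/bot{token}/{method}"

def build_request(token, method, **params):
    """Build an HTTP request against the Telegram Bot API (not sent here)."""
    data = json.dumps(params).encode()
    return urllib.request.Request(
        API.format(token=token, method=method),
        data=data,
        headers={"Content-Type": "application/json"},
    )

def naive_reply(text):
    """A crude stand-in for real NLP: keyword matching on the message."""
    text = text.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How can I help?"
    if "price" in text:
        return "Let me check the current pricing for you."
    return "Sorry, I did not understand that."

# Decide a reply and prepare (but do not send) the API call.
reply = naive_reply("Hi there, what is the price?")
req = build_request("TOKEN", "sendMessage", chat_id=123, text=reply)
print(reply)
print(req.full_url)
```

The gap between this toy keyword matcher and understanding "people's true intent" is exactly where the expensive third-party NLP APIs were supposed to come in, and exactly the gap models like LaMDA claim to close.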

The usual disclaimers:

But the most important question we ask ourselves when it comes to our technologies is whether they adhere to our AI Principles. Language might be one of humanity’s greatest tools, but like all tools it can be misused. Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use. 

I think this statement has to do with the rather public exit of their AI researchers, a showdown that must have required considerable firefighting; hence these disclaimers have started appearing in the blog posts. I don't think the risk is limited to this use case alone: like everything, AI can be weaponized. I will be watching these developments with interest.