Chatbots: Still Dumb After All These Years | Mind Matters
Blaise Agüera y Arcas, the head of Google’s AI group in Seattle, recently argued that although large language models (LLMs) may be driven by statistics, “statistics do amount to understanding.” As evidence, he offers several snippets of conversation with Google’s state-of-the-art chatbot LaMDA. The conversations are impressively human-like, but they are nothing more than examples of what Gary Marcus and Ernest Davis have called an LLM’s ability to be “a fluent spouter of bullshit” and what Timnit Gebru and three co-authors called “stochastic parrots.”
I stumbled on this fascinating write-up, which debunks most of the myths around AI achieving “general intelligence”. Before my attempt to build a chatbot on Telegram, a deep dive into the world of natural language processing libraries opened my eyes: although an integration with the bot would have been feasible, it would have served no useful purpose. Most of the solutions on the market at the time were white-label products, that is, general-purpose software with no provisioning for niche use cases. I didn’t proceed because of the associated costs; instead I created a simple bot that served responses from a list of pre-answered FAQs. I couldn’t attract any funding for it because it wasn’t “sexy enough”.
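The kind of FAQ bot described above can be sketched in a few lines: rather than any real language understanding, it just fuzzy-matches the user’s message against a hand-written question list. The questions, answers, and matching threshold here are hypothetical, not taken from the actual bot.

```python
import difflib

# Hypothetical pre-answered FAQ list: canned question -> canned answer.
FAQS = {
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where are you located": "Our office is in downtown Springfield.",
}

FALLBACK = "Sorry, I don't know that one. A human will get back to you."

def answer(message: str) -> str:
    """Return the canned answer whose FAQ question best matches the message."""
    cleaned = message.lower().strip("?! .")
    # difflib.get_close_matches does simple string similarity, no NLP at all.
    match = difflib.get_close_matches(cleaned, FAQS, n=1, cutoff=0.5)
    return FAQS[match[0]] if match else FALLBACK
```

Wiring `answer()` to a Telegram message handler is then a thin layer on top; the point is that such a bot answers only what it was pre-answered for, and falls back for everything else.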
Nevertheless, the author takes models such as GPT-3 and LaMDA and “debunks” the hype. I am not dismissing these developments entirely; systems like GPT-3 may well be precursors to something still hidden in the background. They are well-funded technology demonstrators, and it would be a mistake to disregard them completely.