I won’t go into depth on the issues discussed herein, but suffice it to say that ChatGPT works based on the “co-occurrence” of words. What is less appreciated is that parsing the input to “make sense” to the ML model (or whatever they call it) requires careful deliberation. Therefore, ChatGPT will remain in beta for the time being, as they slowly expand (and improve) the input parameters. It is still an interesting advance.
All-knowing machines are a fantasy | Emily M. Bender and Chirag Shah » IAI TV
First, what they are designed to do is to create coherent-seeming text. They do this by being cleverly built to take in vast quantities of training data and model the ways in which words co-occur across all of that text. The result is systems that can produce text that is very compelling when we as humans make sense of it. But the systems do not have any understanding of what they are producing, any communicative intent, any model of the world, or any ability to be accountable for the truth of what they are saying. This is why, in 2021, one of us (Bender) and her co-authors referred to them as stochastic parrots.
There will be opposition to these informational flows (or how they are processed) in the name of AI. The authors have done their due diligence in explaining the idea in context, but their opposition isn’t solid. I honestly want ChatGPT to grow, because it would make building conversational frameworks easier for me. I remember the earlier stumbling blocks in trying to achieve the same. The technology has grown rapidly in the past five years (and it’s astounding!).