Hallucinating chatbots

I usually don't endorse the “paid press releases” that dot the New York Times; I only link to them so a general audience can understand them. NYT is a cesspool; it's the worst form of journalism masquerading as a “credible source” (which it isn't). It is the propaganda arm of vested interests. With that understood in the background, it is important now to link to their write-up.

Chatbots May ‘Hallucinate’ More Often Than Many Realize – The New York Times

Chatbots like ChatGPT are driven by a technology called a large language model, or L.L.M., which learns its skills by analyzing enormous amounts of digital text, including books, Wikipedia articles and online chat logs. By pinpointing patterns in all that data, an L.L.M. learns to do one thing in particular: guess the next word in a sequence of words.

Because the internet is filled with untruthful information, these systems repeat the same untruths. They also rely on probabilities: What is the mathematical chance that the next word is “playwright”? From time to time, they guess incorrectly.
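To make the “guess the next word” idea concrete, here is a tiny illustrative sketch of my own (not from the article): a made-up probability table over candidate next words, sampled at random. Most of the time the likely, correct word comes out, but occasionally a low-probability wrong word gets picked, which is the essence of a hallucination.

```python
# A minimal sketch of next-word sampling; the words and probabilities
# below are invented purely for illustration.
import random

def pick_next_word(probabilities):
    """Sample one word according to its probability."""
    words = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(words, weights=weights, k=1)[0]

# Toy distribution for a prompt like "Shakespeare was a famous ..."
next_word_probs = {
    "playwright": 0.70,  # most likely, and correct
    "poet": 0.25,        # also plausible
    "physicist": 0.05,   # unlikely but still possible -> a "hallucination"
}

print(pick_next_word(next_word_probs))
# Most runs print "playwright", but sometimes the sampler picks
# "physicist" -- a confident-sounding wrong guess.
```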

A little more:

But researchers warn that chatbot hallucination is not an easy problem to solve. Because chatbots learn from patterns in data and operate according to probabilities, they behave in unwanted ways at least some of the time.

I won't link further to the names of the companies promoted here, but these are classic paid endorsements. It is difficult for a casual reader to pinpoint, but apply the skeptic's lens and you start questioning the motives. Nevertheless, this much is clear: bots hallucinate and give incorrect information. Be careful when you use them.
