Here’s something remarkable from MIT: a publication otherwise known to “hype up” AI has some scathing remarks on OpenAI. They got a researcher to help them generate some samples and have practically blown it out of the water. Here’s the blurb:
Is GPT-3 an important step toward artificial general intelligence—the kind that would allow a machine to reason broadly in a manner similar to humans without having to train for every specific task it encounters?
OpenAI’s technical paper is fairly reserved on this larger question, but to many, the sheer fluency of the system feels as though it might be a significant advance.

GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about | MIT Technology Review
I think they sound like someone with an axe to grind. GPT-3 is a significant advance in AI text generation, and it may also have nefarious uses, for example fake news. That isn’t my focus; the point is that we now have machine-generated, human-readable text that makes some sense. I won’t dwell on its practical applications, but in the limited subset I have read or seen, it shows intriguing possibilities.
The article goes on:

If you dig deeper, you discover that something’s amiss: although its output is grammatical, and even impressively idiomatic, its comprehension of the world is often seriously off, which means you can never really trust what it says.
I am not sure how the creators foresee its applications, but clearly MIT is miffed.