GPT-3: The astonishingly good but predictably bad AI program

GPT-3 primer by the Financial Times:

GPT-3, which stands for generative pre-trained transformer version three, is, in essence, a super-sophisticated auto-complete function, which sounds less than exciting.

But what makes GPT-3 remarkable is its scale and flexibility and the possibilities for future development. 

Drawing on hundreds of billions of words ingested from the internet and using neural network technology similar to that used by Google DeepMind’s AlphaGo, GPT-3 was trained to spot and then replicate sophisticated patterns. GPT-3 contains 175bn language parameters, more than 10 times the next biggest equivalent model.

The astonishingly good but predictably bad AI program | Financial Times
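
To make the FT's "super-sophisticated auto-complete" framing concrete, here is a minimal sketch of the same autoregressive idea using the smaller, openly available GPT-2 through Hugging Face's transformers library (GPT-3 itself is reachable only through OpenAI's hosted API). The model choice, prompt, and sampling settings here are illustrative assumptions, not anything taken from the FT piece:

```python
# A minimal sketch of the "auto-complete at scale" idea, assuming the
# openly available GPT-2 as a stand-in for GPT-3 (which is API-only).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Illustrative prompt only; any text works.
prompt = "The patient asked whether the new medication"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation: the model repeatedly predicts a plausible
# next token and appends it -- the auto-complete loop the primer
# describes, just scaled down from 175bn parameters.
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,      # sample rather than always taking the top token
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is that nothing in the loop is task-specific: the same next-token machinery does patient support, marketing copy, or disinformation, depending only on the prompt and the training data.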

The possibilities are endless. One way GPT-3 can find its utility is in “patient support”, although it would require a lot of training to understand the context. The author also cautions about the “dark uses”, such as mounting disinformation campaigns at scale, but despite the ethical objections, this misses the point completely.

It is relatively easy to set up websites at scale and automate the process, and tiny variations in text (and context) can be made. It may be easier to dismiss this “advance” as “silly”, but how can you stop the model from iterating further?
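
For what it is worth, the variation worried about here falls out of the sampling step for free: re-running the same prompt with sampling enabled yields a different completion every time. A toy illustration, again assuming GPT-2 as a stand-in and an invented prompt:

```python
# Toy illustration: stochastic sampling means the same prompt yields a
# slightly different completion on every run -- near-duplicate variants
# come essentially for free.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The council announced today that", return_tensors="pt")

for seed in range(3):
    torch.manual_seed(seed)  # vary the RNG so each sample differs
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```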
