GPT-3: The Obligatory GPT-3 Post

This is a fascinating deep dive into the language model called GPT, which, if my understanding is correct, is used to churn out write-ups through “AI”. I have used a few of these models in the past, and the results are decent as long as no one examines them closely for obvious relational dependencies.

OpenAI has released a new paper, Language Models Are Few-Shot Learners, introducing GPT-3, the successor to the wildly-successful language-processing AI GPT-2. GPT-3 doesn’t have any revolutionary new advances over its predecessor. It’s just much bigger. GPT-2 had 1.5 billion parameters. GPT-3 has 175 billion. The researchers involved are very open about how it’s the same thing but bigger. Their research goal was to test how GPT-like neural networks scale.

The Obligatory GPT-3 Post | Slate Star Codex
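
For a concrete feel of what “few-shot learners” means, here is a minimal sketch of the idea: the task is demonstrated with a handful of worked examples inside the prompt itself, and the model simply continues the text, with no fine-tuning involved. The sketch uses the publicly available GPT-2 through the Hugging Face transformers library (my own choice for illustration; GPT-3 itself is only reachable through OpenAI’s hosted API), and the small GPT-2 model will often get the completion wrong, which is exactly the gap the paper argues scale closes.

```python
# Minimal sketch of few-shot prompting: show the task as a few worked
# examples in the prompt, then let the model continue the pattern.
# Uses GPT-2 (the public predecessor) via Hugging Face transformers;
# results from the small model are illustrative only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Few-shot prompt: three English-to-French pairs, then a new item to complete.
prompt = (
    "English: cheese\nFrench: fromage\n"
    "English: bread\nFrench: pain\n"
    "English: apple\nFrench: pomme\n"
    "English: house\nFrench:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```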

Follow the link if you are curious about how neural networks scale. I am wondering if they could be trained on the EMR (electronic medical record)! It would take away the hassle of typing everything out.