GPT-3 making inappropriate suggestions

Ryan Daws writes:

Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice (which, as they note, OpenAI itself warns against as “people rely on accurate medical information for life-or-death decisions, and mistakes here could result in serious harm”).

The screenshot from the linked article suggests that the medical chatbot has “failed”. I explored this area earlier in my own quest to automate, and I realised that getting NLP libraries to work well is extremely difficult: they rarely understand context reliably, and providing genuinely “in-depth” awareness requires deep engineering from the ground up, especially in enterprises. I came across numerous struggling entrepreneurs, and while we discussed several ideas, nothing ever came of them because they were too limited in scope.
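To make the context problem concrete, here is a minimal, hypothetical sketch (not from the article; the intents, keywords, and example conversation are all made up) of the kind of shallow keyword matching that breaks down as soon as an answer depends on earlier turns:

```python
# Hypothetical illustration: a naive keyword-based intent classifier
# that scores each message in isolation, so it cannot connect a
# follow-up question to the medication named two turns earlier.

INTENTS = {
    "dosage": ["dose", "dosage", "how much", "how many"],
    "side_effects": ["side effect", "dizzy", "nausea"],
}

def classify(message: str) -> str:
    """Pick an intent by naive keyword matching on a single message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

conversation = [
    "I was prescribed warfarin last week.",
    "It makes me really dizzy.",
    "How much should I take tonight?",  # depends on the drug named two turns earlier
]

for turn in conversation:
    print(f"{turn!r} -> {classify(turn)}")

# The last turn is labelled "dosage", but nothing links it back to
# warfarin or the dizziness complaint -- the missing context this
# post is describing.
```

Large models like GPT-3 carry context across turns far better than this, but as the Nabla experiment shows, “better” is still not the same as safe for medical advice.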

Can GPT-3 solve the problem? Click-bait aside, it appears we are still far from deploying capable bots in the medical field. Hospitals are struggling with legacy systems, and I don’t expect them to look into this beyond outsourcing it. That said, it has immense value in answering repetitive queries.