ChatGPT and impact on medicine

So far, I haven’t blogged about this, though I recently published a long-form piece on it with a co-author. We had an interesting brainstorming session around it, and it took me several days to get it “right”. My views lean conservative, and therefore I could only see “disruption” in the process; I am not wired to see “rosy nonsense”.

Vinay is that kind of person. I admire his throughput on editorials and long-forms (and the ideas underpinning them), in addition to his podcasts and active Twitter presence. (I hope he’s automated it!) Nevertheless, he makes an interesting point here:

Chat GPT will change Medicine

Here is my quick summary: Chat GPT responds to questions or prompts with coherent and mostly accurate answers in complete sentences and full paragraphs. It can draw upon a deep well of data, and it can make arguments and comparisons. It can even draft statements or documents. It can formulate interesting, but not highly original arguments. If you want some ideas for how to get started, ask it to tell you about minor celebrities or imitate the style of writers or other noted people. Ask it to draft a tweet in the style of <insert person>. The responses may be uncanny.

He sees the following possibilities:

When it comes to academic writing, Chat GPT will soon become the affordable medical writer. Medical writers will supervise Chat GPTs— and only the ones with the highest levels of content knowledge will survive. Most will need new jobs.

Most review articles will be written by Chat GPT. They will also be mostly read by Chat GPT, and users will ask the software to provide shorter summaries. Ironically then, Chat GPT will be used both to generate writing and parse it into digestible pieces.

He mentions this statistic:

In the world of Chat GPT— it will be harder to distinguish oneself this way. Currently, original thinkers are limited by the time it takes to draft one’s ideas. With Chat GPT this restriction will be removed. Copying will be hard as original thinkers will have a head start and unlimited drafting potential. My team published 61 articles in the last year with tremendous effort. Meanwhile we had many more unique ideas we did not have time to draft. With Chat GPT we will be unchained and will easily hit 100-200 articles per annum. Research teams that borrow our ideas will be at severe disadvantage.

This is what he says (and I agree):

Chat GPT will permit more mediocre people to become doctors. There are many important aspects of being a doctor. Speaking compassionately & empathetically, performing physical labor, and making sound medical decisions. These are all equally important, but Chat GPT will dramatically change the last.

Currently, a mediocre thinker can rely on algorithms or flow charts to make medical decisions, but these can be in error, and the person may have doubt. Chat GPT will be used to make these decisions going forward. Currently, you can type the details of a case and Chat GPT can give treatment ideas. In the future, Chat GPT will extract details from the chart and make recommendations by itself.

This is the heart of the argument around the algorithmic approach to medicine, and around justifying its use cases through empiricism. While that’s a topic for another day, the reason I am linking to this blog post here is something I have mentioned before: using the blog to generate novel ideas. I blog here to create insights that later flow into long-forms published elsewhere. So while ChatGPT might become better at “generating ideas”, executing them will remain the painful part.


Once this happens, and given other trends in America, very likely more mediocre thinkers will be recruited into medicine. I am not using mediocre as an insult here. I am just trying to say that when it comes to critical medical thinking there is a spectrum, and Chat GPT will level this tremendously. Of course, the very brightest doctors will outperform the software, but these will be few and far between.

I have mentioned the pedagogical outcomes before: if you train medical doctors in algorithmic approaches, their “powers of deduction” will be blunted and critical thinking disfavoured. Medicine, as a science, is not deterministic but stochastic, and the overall outcome often depends on unseen variables.
