It is not surprising that such papers will be “published”, even when the evidence that generative text is insufficient sits in plain sight. Whither peer review for junk science? Nevertheless, the editorial linked here does err on the side of caution:
ChatGPT: friend or foe? – The Lancet Digital Health
In the realm of health care, Sajan B Patel and Kyle Lam illustrated ChatGPT’s ability to generate a patient discharge summary from a brief prompt. Automating this process could reduce delays in discharge from secondary care without compromising on detail, freeing up valuable time for doctors to invest in patient care and developmental training. A separate study also tested its ability to simplify radiology reports, with the generated reports being deemed overall factually correct, complete, and with low perceived risk of harm to patients. But in both cases, errors were evident. In the discharge summary example provided by Patel and Lam, ChatGPT added extra information to the summary that was not included in their prompt. Likewise, the radiology report study identified potentially harmful mistakes such as missing key medical findings. Such errors signal that if implemented in clinical practice, manual checks of automated outputs would be required.
It concludes, somewhat erroneously:
Widespread use of ChatGPT is seemingly inevitable but in its current iteration careless, unchecked use could be a foe to both society and scholarly publishing. More forethought and oversight on model training are needed, as is investment in robust AI output detectors. ChatGPT is a game changer, but we’re not quite ready to play.
There are services that can detect generated text, though they may not work all the time. The editorial seems written in haste, raising hackles over “AI as a warning shot”. It isn’t one. For the most part, it is co-option: the technology takes the associated drudgery out of the work.