This is a two-part series on the ethical considerations of AI. In this first part, I am paraphrasing a long-form article from Stat News; I will follow it up in the second part with commentary from another blog post.

The pertinent highlights are as follows:
- Since February of last year, tens of thousands of patients hospitalized at one of Minnesota’s largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model.
- Few if any of those patients have any idea about the AI involved in their care.
- That’s because frontline clinicians at M Health Fairview generally don’t mention the AI whirring behind the scenes in their conversations with patients.
- Do the data from these uses of AI go back to the software makers?
- Patients should certainly be notified of that.
- If hospitals don’t at least periodically evaluate the AI tools they are using, it’s like not calibrating their x-ray or their clinical lab.
- The author thinks it’s all well and good to say “the AI won’t get used to make decisions.” But what do you do in five years, when a generation of young doctors comes up reliant on it as a crutch rather than doing the hard work of learning to make decisions themselves?
- Look at any industry nowadays.
- You’re hard pressed to find something that doesn’t absolutely rely on technology.
- Today’s experienced docs may not completely rely on it, but the kids that grow up with it will.
- The authors call that de-experting.
- The development of this dependency creates a new cause of death: statistical insignificance.
- These patients will be statistically insignificant “AI orphans” (AIO).
- For at least the next 7-15 years, AI orphans will need a real doctor.
- The author finds it reprehensible that physicians would not disclose the use of AI to patients.
- What harm is there in telling people that several tools, like lab tests and x-rays, are taken into consideration when making decisions about care?
- Not getting a patient’s consent smacks of deception.
- Nobody wants to think that they’re being hoodwinked, especially when they’re sick, by a physician whom they should be able to trust completely.
- The author finds the claim that doctors don’t tell patients an x-ray is the reason they are being sent home patently false.
- Most doctors will sit there and tell you whether your x-ray looks good and whether that means you can go home.
- Can the author, as a patient, refuse to have AI involved in decisions about their own health?
- What good does it do to request a second opinion if that opinion is generated artificially rather than by a second specialist?