There was a “recommendation” from one of the “marketing gurus” that blogging is a good outlet for “unpopular opinions”. It is difficult to agree, because it would mean stoking the “fires of controversy”, but as reasonable people we can discuss this more rationally.
The chain of thought sprang from Google making significant advances in “computer vision”: a model able to recognise breast cancer on mammograms. It is relatively trivial to train a model to spot patterns, and the usual thought process is that “more data equals better results”. There are several caveats, because the final paper contained so much technical mumbo-jumbo that specialist certifications would be needed just to understand its impact. The reality appears to be different: to screen a large number of mammograms so that “highly trained radiologists” can bill for CT scans and MRIs instead. While this is a highly cynical view, Google’s effort to step up its game in healthcare stems from the “ability to offer this as a service to hospitals”. Of course, the original authors will get to crow about it at glitzy conferences with attendees nodding in unison, but how widespread will the idea be? How will it be applicable in “underfunded” centres?
Likewise, I stumbled upon the promise of “adaptive radiotherapy” driven by “artificial intelligence”-based “computing” that promises “unprecedented accuracy”. Let’s think about this rationally. How has the dosimetric advantage translated into biological outcomes? How have we closed the feedback loop with clinical symptoms? Xerostomia still occurs, yet we haven’t worked on the quantification of symptoms. Modulation has not been rigorously tested, only gleaned from retrospective institutional reports. In the same breath: how has particle therapy helped?
We are attuned to extensive marketing and to choices of words that try to push a minor dosimetric gain as the “next best thing”. Biology matters. Dose fractionation matters. Dose escalation is still a utopia. Hypoxia matters (and not the immunotherapies or CAR T-cells). It represents a bankruptcy of ideas.
Yes, there has been an advance in AI, but it hasn’t permeated to the level of “general intelligence”.
Why do we suspend our common sense? The Apple Watch, for example, was tested for analysing “heart rhythms”. Apple needed a use case to sell its digital junk. After getting it “academically vetted”, it would make a strong case for insurance companies to underwrite it for “vulnerable seniors”. I can visualise the imagery: a happy old couple with their Apple Watch, and product placements showing how it saved his life when he had a fall and it dialled 911 immediately.
It would be hard to counteract the strong notional belief in “differential privacy”, which has nothing “different” about it. Yet it allows a single company to dominate the market for healthwear and gain incremental revenue underwritten by the risk pools. I’d say it’s a brilliant strategy: a massive win for investors and the corporations, and a big loss for healthcare and medicine. It is only going to create a rift between the “digital haves and have-nots” and perpetuate the divide.
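For readers unfamiliar with the term, “differential privacy” usually means the Laplace mechanism and its relatives: calibrated random noise is added to an aggregate statistic so that no single individual’s record can be inferred from the released number. A minimal sketch in plain Python (the function name and the arrhythmia-alert scenario are hypothetical illustrations, not anything from Apple’s or Google’s actual systems):

```python
import math
import random

def private_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
    """
    u = random.random() - 0.5
    if u == -0.5:  # avoid log(0) at the measure-zero endpoint
        u = 0.0
    # Inverse-CDF sample from a Laplace(0, 1/epsilon) distribution
    noise = -(1.0 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical example: a wearable vendor reporting how many users
# received an arrhythmia alert this month.
print(private_count(1000, epsilon=0.1))
```

Note that the guarantee covers only the released aggregate; the company still holds the raw data, which is precisely the point of contention above.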
We need to put these issues in perspective. My idea is not to “attack my colleagues” who have designed these studies, but they should pause and reflect that supping with the devil won’t fetch them the returns they hope for. We are merely cogs in the wheel, and I don’t know which way it will turn to crush me. It could be you (or anyone else).