I was thinking about the usual dilemma we face: what are the chances that the patient presenting to us will actually be “cured”?
If we go back to radiobiology, even a 2-log cell kill is enough to take a tumour below the threshold of clinical detection, so the disease stops being apparent. But is falling below the detection threshold the same as cure? We know it is not.
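A quick sketch of the arithmetic behind that gap. The figure of roughly 10^9 cells for a just-detectable (~1 cm³) tumour is the standard textbook order of magnitude, used here purely as an illustrative assumption, not a clinical measurement:

```python
# Illustrative log-kill arithmetic (textbook order-of-magnitude figures,
# not patient data): a ~1 cm^3 tumour, near the limit of clinical
# detectability, contains on the order of 1e9 cells.

DETECTION_THRESHOLD = 1e9  # cells in a just-detectable tumour (assumed)

def cells_after_log_kill(initial_cells: float, log_kill: float) -> float:
    """Each 'log' of cell kill removes 90% of the remaining cells."""
    return initial_cells / 10 ** log_kill

remaining = cells_after_log_kill(DETECTION_THRESHOLD, 2)
print(f"After a 2-log kill: {remaining:.0e} cells remain")
```

A 2-log kill leaves on the order of ten million viable cells: invisible on imaging, and nowhere near eradication.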
Then why are millions of dollars being poured into CAR-T cell therapies? Because they are being marketed as the “significant cancer advance”, with all kinds of statistics plastered over the claims. Cynically speaking, let’s see what spaghetti sticks to the wall.
Can research predict cancer outcomes? Can a personalised genome set predict them? Can specific genomic signatures predict the kind of fractionation a patient needs? Can we answer those questions?
Prediction is very hard. Nate Silver is maybe the best political predictor alive, and he estimated a 29% chance of Trump winning just before Trump won. UPenn professor Philip Tetlock has spent decades identifying “superforecasters” and coming up with complicated algorithms for aggregating their predictions, developing a prediction infrastructure that beats top CIA analysts, but they estimated a 23% chance Britain would choose Brexit just before it happened. This isn’t intended to criticize Silver or Tetlock. I believe they’re operating at close to optimum – the best anyone could possibly do with the information that they had. But the world is full of noise, tiny chance events can have outsized effects, there are only so many polls you can scrutinize, and even geniuses can only do so well.
Cancer grows in many unpredictable ways. The key is not to chase those “outsized influences” but the smaller ones. Perhaps, someday, I might be able to work on my own grand unified theories.