This is an interesting take:
AMD CEO: The Next Challenge Is Energy Efficiency – IEEE Spectrum
Despite a slowdown of Moore’s Law, other factors have pushed mainstream computing capabilities to double about every two and a half years. For supercomputers, the doubling is happening even faster. However, Su pointed out, the energy efficiency of computing has not been keeping pace, which could lead to future supercomputers that require as much as 500 megawatts a decade from now.
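To see why the power budget balloons, here is a back-of-the-envelope sketch in Python. The doubling times are illustrative numbers I am assuming for the sake of the arithmetic, not Su's actual figures: the point is only that when raw performance doubles faster than performance-per-watt, the machine's power draw has to grow to cover the difference.

```python
# Illustrative only: how a gap between performance scaling and efficiency
# scaling turns into a huge power budget. Doubling times and the 20 MW
# baseline are assumptions for the example, not figures from the talk.

def projected_power_mw(baseline_mw, years, perf_doubling_yrs, eff_doubling_yrs):
    """Power needed after `years` if performance doubles every
    `perf_doubling_yrs` but perf-per-watt only every `eff_doubling_yrs`."""
    perf_gain = 2 ** (years / perf_doubling_yrs)   # total speedup
    eff_gain = 2 ** (years / eff_doubling_yrs)     # total perf-per-watt gain
    return baseline_mw * perf_gain / eff_gain      # power covers the gap

# Hypothetical inputs: a ~20 MW exascale-class machine today, performance
# doubling every ~1.2 years, efficiency every ~2.5 years.
print(f"{projected_power_mw(20, 10, 1.2, 2.5):.0f} MW")  # ~400 MW, i.e. hundreds of MW
```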
I am closely watching chip developments (as a source of computing power) and the challenges of performing computations for AI.
The last quote is important:
But processor innovation in itself won’t be enough to get to zettascale supercomputing, Su said. Because AI performance and efficiency improvements are outstripping gains in the kind of high-precision math that’s dominated supercomputer physics work, the field should turn to hybrid algorithms that can leverage AI’s efficiency. For example, AI algorithms could get close to a solution quickly and efficiently, and then the gap between the AI answer and the true solution can be filled by high-precision computing.
Just swap on-board treatment planning systems into that picture (and solve the challenges of “on-the-couch” computation) for the “treatment” calculations: the AI gets close to the answer quickly, and high-precision computing fills in the gap. The need to track these changes will become immediately apparent.
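To make that division of labor concrete, here is a minimal sketch under assumptions of my own: a toy linear system stands in for the physics (or dose) calculation, and `ai_surrogate` is a hypothetical stand-in for the fast AI model, not any real dose engine. The surrogate gets close to the answer cheaply, and a conjugate-gradient stage then closes the gap to full precision.

```python
import numpy as np

# Minimal sketch of the hybrid idea: a small symmetric positive-definite
# system A x = b stands in for the high-precision physics/dose calculation,
# and `ai_surrogate` is a hypothetical cheap model that only gets *near*
# the answer. Conjugate gradient then fills the gap to the true solution.

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # SPD matrix standing in for the physics operator
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)     # reference solution, used only to check the final error

def ai_surrogate(A, b):
    """Hypothetical 'AI' stage: a cheap diagonal (Jacobi) approximation."""
    return b / np.diag(A)

def cg_refine(A, b, x0, tol=1e-10, max_iter=500):
    """High-precision stage: conjugate gradient warm-started from x0."""
    x = x0.copy()
    r = b - A @ x                  # residual of the approximate answer
    if np.linalg.norm(r) < tol:
        return x, 0
    p = r.copy()
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, max_iter

x0 = ai_surrogate(A, b)            # fast, approximate answer
x, iters = cg_refine(A, b, x0)     # precise correction of that answer
print(f"refined in {iters} CG iterations, "
      f"error vs. true solution: {np.linalg.norm(x - x_true):.2e}")
```

Warm-starting the precise solver with the surrogate's output is the simplest way to express "AI gets close, precision closes the gap"; an on-the-couch workflow would substitute its own dose engine and accuracy tolerance, with the whole loop fitting inside the adaptive-planning time budget.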