I was surprised to see the "leadership" example come from the UK. The blog post is clear, crisp and shorn of the usual "AI potential" hype for healthcare. It doesn't make any lofty statements.
It included MHRA, NICE, CQC, the Information Commissioner, the National Data Guardian, the Health Research Authority, the Centre for Data Ethics and Innovation, NHSD, NHSR, the Better Regulation Executive and the MRC.
Here’s the key takeaway (emphasis mine):
Communication with clinicians, innovators and – crucially – the public. We need to keep explaining what we are doing, so people with views, expertise and concerns can feed them in rather than feel there is a secret process being done to them.
I think it's the clinicians who need to understand what AI/ML is. I don't claim any expertise in mathematical models, but I work on "heuristics", a fancy term for "gut feeling", whereby I know what it can achieve.
Regulation is the crucial point, along with data localisation, open APIs and a "sandbox" to try out various AI "products". Open APIs would help others tap into databases and create more meaningful patterns for interpretation. I am working on an idea along similar lines to explore its potential in a hospital setting.
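To make the open-API point concrete, here is a minimal sketch of how a third party might join two openly exposed datasets on a shared patient identifier. Everything here is hypothetical — the field names, thresholds and mocked JSON responses are invented for illustration, not taken from any real NHS API (a real deployment would use an agreed standard such as HL7 FHIR):

```python
# Hypothetical sketch: two "open API" responses, mocked as in-memory JSON
# rather than real HTTP calls. All identifiers and values are made up.
import json

labs_response = json.dumps([
    {"patient_id": "p1", "hba1c": 9.1},
    {"patient_id": "p2", "hba1c": 5.4},
    {"patient_id": "p3", "hba1c": 8.7},
])

meds_response = json.dumps([
    {"patient_id": "p1", "drugs": ["metformin"]},
    {"patient_id": "p2", "drugs": []},
    {"patient_id": "p3", "drugs": []},  # raised HbA1c, no treatment recorded
])

labs = {row["patient_id"]: row for row in json.loads(labs_response)}
meds = {row["patient_id"]: row for row in json.loads(meds_response)}

# Join the two datasets on the shared identifier and flag a pattern
# that neither database reveals on its own.
flagged = [
    pid for pid in labs
    if labs[pid]["hba1c"] > 7.0 and "metformin" not in meds[pid]["drugs"]
]
print(flagged)  # ['p3']
```

The point is the join itself: each database in isolation is just a list, but linking them across institutions is what surfaces the "more meaningful patterns" worth interpreting.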
Besides, health informatics needs to gear up. We need to be able to explain the conceptual ideas that underpin "machine learning". A "computer" won't gain the ability to "think on its own"; it will instead help spot patterns in seemingly disparate datasets. We cannot possibly process the whole genome in the clinic (as of now), for example, but it has to become feasible to introduce it in a structured way.
Regulation has to evolve continuously as computational progress happens; otherwise, it would become sclerotic. We are in an exciting phase of "industrialisation" which will define our healthcare in the future.