Is regulation killing medical software innovation?


Once in a while, I stumble on an interesting blog post, either on Twitter or through recommendations from a community-powered website. It merits deep consideration and often requires a thorough analysis of the ideas.

I have been writing about open source, forks, and the stupid copyright licenses that accompany them. You need just one lawyer to spoil the party!

Jokes apart, regulation is essential, but the extent of regulation is debatable, and we test those assertions about its applicability in the courts of law. For example, what's the point of having an AI-based "radiology software" to "diagnose" and then generate "false reports"? I think the idea germinated in the mind of a busy radiologist who was fed up with the grunt work of reading "normal chest X-rays", and it quickly spawned a raft of "startups".

Here's one giveaway for a startup looking at AI in radiology: always remember that there is an actual human behind the image, and you need to be aware of the real-world implications. I have sat through presentations by startups trying to accomplish "everything" from a presentation deck without committing the first line of code to accomplish at least one thing first. There is no point in naming names, but the VC was hawking the "product" to get easy money upfront. I refused; it didn't feel right by my metric of focusing on the niche.

Coming back to the regulation. Here's an interesting quote from the author:

Having perfect software is a utopic scenario. Software is never perfect. And even if most people build safe stuff most of the time, just a small group of reckless individuals is enough to cause disaster.
That’s where regulation comes in. Regulation basically says: “Hey, this one company screwed up big time recently, so from now on, every company has to comply with this list of things we came up with. Those measures will surely prevent that from happening again!”.
And that sucks. Because what did we do? We took the maximum observed stupidity of our society (writing buggy code that emits radiation) and now we assume that 1) everyone else is just as stupid and 2) that some rules we came up with will prevent the problem in the future.

I am not sure whether the regulatory authorities actually examine the code. If they examine only the documentation describing how the code is run, well, that's utterly inadequate. Trust in the government agencies, therefore, has to rest on their holding the code in "escrow" and not using it for anything else.


Here, the author describes how complexity in software leads to more audits, possibly delaying its delivery to end users.

Building medical software has tons of additional overhead. This leads to slower development, additional hires and more audits. What do companies need when development slows down, head count increases and auditors knock on your door? Right – more money.
Only companies with a lot of funding (and time) are able to enter the market. A two-person startup can’t afford to hire expensive staff for regulatory compliance. And they sure as hell can’t afford to put countless man-hours into documenting regulatory-compliant processes! What sort of processes do you need in a two-person startup, anyway?

I think the reason is more complex than simply being a "well-funded" company. For example, the existing duopolies in radiation equipment machinery palm off PhD theses and monetise the "innovations", or "acqui-hire" the "startups" with ideas. The well-funded companies have "ongoing existing relationships" with the regulators, and there is no way to independently verify their assertions.

I think the better way forward is to open source the code and the hardware through regulation. The "Android" model of making smartphones can be looked at in earnest to define the path forward for improving accessibility.

I understand that these ideas are perhaps too radical, but we haven't seen potential breakthroughs in a long time. The only major shift was from the 2D era to intensity modulation, and that was nearly 30 years back. I see a lot of senior colleagues hemming and hawing about DIBH, but that remains unproven in the clinical domain.

The author also draws a parallel from aviation: acceptable failures and engineering have progressed to the point where it is relatively safe to fly, and I'd second that. The Boeing 737 MAX fiasco happened because an engineering company tried to turn into a "sales company that also manufactures planes" and lost its roots.

Safe engineering is mercifully coming back into vogue, and hopefully safe software and hardware will come back with it.


Hence, I have always remained a huge fan of microkernel architecture. Remember that BlackBerry 10 pioneered the UI interactions we now see on the iPhone. QNX was fascinating because it was fault-tolerant: kernels grow buggier as more complexity is piled in, and a microkernel keeps drivers and services outside the kernel, so a crash in one does not bring down the system. BlackBerry 10 failed because of inept marketing and, ironically, because developers couldn't palm off user data thanks to the complex restrictions around the code. Circa 2012, almost a decade earlier.
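The fault-isolation idea behind a microkernel can be illustrated with a toy sketch; this is nothing like real QNX code, and every name in it is made up for illustration. A supervising process (standing in for the kernel) launches a deliberately buggy "driver" as a separate OS process, so the driver's crash is contained and reported instead of taking the supervisor down:

```python
import subprocess
import sys

def run_driver(request: str) -> int:
    """Run a flaky 'driver' in its own OS process and report its fate.

    The process boundary is the whole point: the driver can crash
    however it likes, and the supervisor merely observes the exit code.
    """
    # A deliberately buggy "driver" that crashes on every request.
    driver_code = (
        f"request = {request!r}\n"
        "raise RuntimeError('driver bug while handling ' + request)\n"
    )
    proc = subprocess.run([sys.executable, "-c", driver_code])
    return proc.returncode

if __name__ == "__main__":
    code = run_driver("read sector 0")
    # The driver crashed, but this process survives and could restart it.
    print(f"driver exited with code {code}; supervisor still alive")
```

In a monolithic kernel the analogous bug would run in kernel space and panic the whole machine; the microkernel trade-off is paying for message passing across the process boundary in exchange for this containment.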

To summarise: regulation is essential, but it requires a radical overhaul of how it is administered if software "progress" and "innovation" are to happen. We can trim the costs of compliance to allow wider dissemination of technology.