Imagine a future where roads have intelligent traffic lights equipped with sensors that detect traffic flow in real time. These lights could adjust the flow of traffic to avoid congestion and idle wait times. Such a system is most effective if each sensor constantly shares its data with all nearby sensors in the network. But how would we orchestrate such a system? We wouldn’t want to send all the raw data back to a central server hundreds of miles away. Instead, we want all the decision-making to happen at the edge. Perhaps each node listens to data updates from nearby nodes and makes decisions independently. Alternatively, a group of nodes might elect a leader that orchestrates traffic between them, and leaders might communicate with each other through a similar mechanism. I don’t know how we’ll solve this problem, but I know solving it will improve traffic.
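To make the leader-election idea concrete, here is a minimal sketch of a "highest ID wins" election among a group of nearby nodes. This is an illustrative toy, not a real consensus protocol (real systems would use something like Raft); the `Node` class, IDs, and liveness flags are all assumptions for the example.

```python
# Toy leader election: the live node with the highest ID coordinates
# traffic for its group. Node IDs and liveness are hypothetical inputs;
# a real deployment would detect liveness via heartbeats.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    alive: bool = True

def elect_leader(nodes):
    """Pick the live node with the highest ID as coordinator."""
    live = [n for n in nodes if n.alive]
    if not live:
        return None
    return max(live, key=lambda n: n.node_id)

cluster = [Node(1), Node(2), Node(3, alive=False), Node(4)]
leader = elect_leader(cluster)
# With node 3 down, node 4 becomes the coordinator for this group.
```

If the leader itself goes offline, the remaining nodes simply re-run the election, which is exactly the kind of decision that can happen locally at the edge without a round trip to a central server.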
I’ll start with the potential first. I have been closely following developments in edge computing, because it is precisely these GPU-enabled tasks that will accelerate the adoption of technology in hospitals. Edge computing will make robotic surgery more efficient: coupled with advances in robotics and computer vision, systems will essentially perform complex manoeuvres under “human supervision”. You can’t afford the round trip of data from central servers to the operating table. Edge computing will essentially serve as the “middle layer” and undergo a “performance review” at the “end of the procedure”.
Earlier on, I had an interesting discussion with surgeons who watch YouTube videos of surgical procedures. The point is to be aware of anatomical quirks (even in a routine cholecystectomy). A system trained on several thousand procedures will vastly outperform any “experienced” surgeon on routine procedures. As always, robotic surgery was pushed by aligned incentives (much like modulated radiation therapy) rather than by a problem in search of a solution.
In the future, developers will have to balance yet another constraint when architecting their applications: politics. Several countries are writing laws governing data flows. For example, China has mandated that data belonging to Chinese citizens cannot leave the mainland.
Typically, this translates to companies hosting a “China stack” siloed from the rest of the world. It doesn’t require a giant leap to imagine that other countries may someday follow suit. Of course, it would be cost-prohibitive for companies to launch a new stack for each country.
In a regionalized architecture, the onus is on the developer to manage the flow of data in compliance with each country’s laws. Counter-intuitively, a global architecture helps developers here, because jurisdictional boundaries can be set at the object level.
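One way to picture object-level jurisdictional boundaries is to tag each record with a jurisdiction and let the platform route writes to a compliant store. This is a hypothetical sketch: the region names, store identifiers, and default-region policy are all illustrative assumptions, not any particular platform’s API.

```python
# Hypothetical routing of data objects to region-compliant stores.
# The store names and the default-region policy are invented for
# illustration; a real system would enforce this in the storage layer.
REGION_STORES = {
    "CN": "store-cn-mainland",  # the "China stack": data stays in the mainland
    "EU": "store-eu-west",
    "US": "store-us-east",
}

def store_for(obj):
    """Resolve the storage backend from the object's jurisdiction tag."""
    region = obj.get("jurisdiction", "US")  # assumed default region
    return REGION_STORES[region]

record = {"user_id": 42, "jurisdiction": "CN", "payload": "..."}
# A write for this record is routed to the mainland-China store,
# while the developer's application code stays global.
```

The application logic stays the same everywhere; only the tag on each object changes where its bytes are allowed to live.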
This is an interesting perspective from the policy angle. A US-trained surgeon will definitely not share the accumulated data with the “rest of the world”. I find it hard to imagine that there will be free flow of data.
However, that still doesn’t change the fact that edge computing will accelerate automation in a meaningful way. I am not a techno-optimist, but I can genuinely appreciate this as a significant advance in the era of computing.