On many occasions when we find ourselves talking to IT executives, we hear how they have a suite of aging applications built on soon-to-be, if not already, end-of-life technologies. More often than not these systems are hosted in costly data centers managed by third parties under inflexible contracts. These applications are critical to the successful operation of the business, while at the same time being one of the largest sources of business and operational risk.
They are all too aware that there is an opportunity to make improvements, optimize processes and unlock new opportunities. Doing this fully, however, is going to be disruptive and introduces many dependencies: for instance, commitments to existing ‘BAU’ work, other change programmes and, not least, the existing plans and budgets of the departments where the end users work.
This is a fascinating write-up on working with new “tech stacks” to replace old but critical technologies that have reached their end of life. As enterprises grow, they come to depend on “legacy frameworks”: if it is not broken, why replace it?
Here’s an insight from the report:
Most legacy systems have ‘bloated’ over time, with many features unused by users (50% according to a 2014 Standish Group report) as new features have been added without the old ones being removed. Workarounds for past bugs and limitations have become ‘must have’ requirements for current business processes, with the way users work defined as much by the limitations of legacy as anything else. Rebuilding these features is not only waste, it also represents a missed opportunity to build what is actually needed today.
Take the case of EMRs. They are significant monolithic structures, often a patchwork of “feature sets” added over time. When it comes to developing clinical decision support systems or running AI solutions on their datasets, the monolith simply won’t allow it. Backporting a feature set, or asking vendors to open up ports to integrate newer applications without threatening the integrity of the system, is a huge ask. Some vendors allow “sandboxing” to “play around with the features” but will not provide interoperability unless there is money on the table. Code changes require time, bureaucratic processes and investment with no clear justification. Efficiency is a metric that doesn’t come with an ROI.
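For contrast, it is worth seeing how little ceremony interoperability requires when a system exposes a standards-based interface such as HL7 FHIR. The sketch below parses a minimal FHIR R4 Patient resource; the identifier, name and dates are invented for illustration, and a real integration would fetch this JSON from the vendor's API endpoint rather than a string:

```python
import json

# A minimal FHIR R4 Patient resource, as a standards-compliant EMR
# might return it (the id and values here are made up for illustration).
patient_json = """
{
  "resourceType": "Patient",
  "id": "example-123",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1984-06-01"
}
"""

def display_name(patient: dict) -> str:
    """Flatten the first HumanName entry into 'Given Family' form."""
    name = patient["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")]).strip()

patient = json.loads(patient_json)
print(display_name(patient))   # Jane Doe
print(patient["birthDate"])    # 1984-06-01
```

The point is not the ten lines of Python; it is that a published resource model makes the integration work predictable, which is exactly what closed monoliths deny you.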
Here’s a case scenario:
Many of the key business operations were handled by the same mainframe, which had been initially commissioned during the very earliest days of their e-commerce operations. Extracting elements from this system was clearly going to be technically challenging. At the same time, business leaders, having seen several disruptive failed projects, wanted to minimize any further disruption to their staff. A further challenge was that current processes and systems made it extremely difficult to prioritize product lines to migrate if a more incremental approach was used. In short, it was very difficult to understand which things they sold made money and which didn’t, so it was felt the only option was to move everything all at once. Based on these challenges, it was felt that just replicating what they had was the best and lowest-risk approach.
While the example is that of a “logistics company”, there are many unknown variables that can influence the outcome. I won’t detail how they approached this problem, but the parallels are clear: the business challenges faced by the fictional logistics company and by a healthcare enterprise are the same.
Therefore, EMR adoption requires an “agility mindset”: a modular system that leaves the platform core intact while letting you play around with the different modules. It also enables rapid A/B testing, and lets you understand how users interact through heat maps or similar techniques to define the most optimal UI. EMRs can be fun if they are designed with the end user in mind.
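The A/B testing mentioned above needs one small but essential piece of plumbing: assigning each user to a variant deterministically, so a clinician sees the same UI on every login. Here is a minimal sketch of one common approach, hashing the user id together with an experiment name; the function name, ids and experiment label are all illustrative, not from any particular EMR:

```python
import hashlib

def ab_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing the
    experiment name plus user id, so assignment is stable across
    sessions and roughly uniform across users."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment,
# so the UI they see never flips between sessions.
print(ab_variant("clinician-42", "order-entry-redesign"))
```

Because assignment is a pure function of the inputs, no per-user state needs to be stored, which keeps the experiment module decoupled from the platform core, exactly the kind of separation the “agility mindset” calls for.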