I am not fond of the WSJ, but sometimes they publish genuinely “out-of-the-box” reporting. The linked write-up features the impact of an AI chatbot in a call centre, but just draw the parallels for healthcare. Enterprises are always in cost-cutting mode (though never for C-suite salaries). Administrators will bend over backwards to demonstrate the “cost savings” (to justify the introduction of new technology), but will not address the friction it causes between the “workers” and the “factory”. Transpose the same scenario to a clinic, where patients and doctors increasingly find themselves in a transactional environment. An intermediary will intercede in this “relationship” to drive the “cost-efficiencies”.
I’ll quote the important passages first, and then explain how I feel this can take a turn for the worse.
The AI Chatbot in Your Workplace: Efficient, Bossy, Dehumanizing – WSJ
Recently, with business growing, HomeServe hired a new agent to assist Mr. Bragg and his co-workers. Named Charlie, she’s an artificial intelligence-powered virtual agent that HomeServe built using a conversational AI platform from Google and other technologies. She answers 11,400 calls a day, routes them to the appropriate departments, processes claims and schedules repair appointments. She whispers in agents’ ears whether a customer is eligible for certain coverage plans and types on agents’ screens why the customer is calling.
Charlie isn’t universally liked inside the Chattanooga call center. She can be controlling, including requiring agents to say specific words when they talk to customers, and penalizing them if they don’t. She sometimes routes callers to the wrong department. “We’re taking up a collection to get Charlie a hearing aid,” said Mr. Bragg’s colleague Robert Caldwell, another top-selling agent, sitting in a cubicle nearby.
Orwellian surveillance has never been more real!
Here’s another:
But when AI handles the simple stuff, say labor experts, academics and workers, humans are often left with more complex, intense workloads. When algorithms like Charlie’s assume more human decision-making, workers with advanced skills and years of experience can find their roles diminished. And when AI is used to score human behaviors and emotions, employees say the technology isn’t reliable and is vulnerable to bias.
These developments highlight the potential for “disruption” as the technology advances into the clinic. What if an AI is placed in the clinic to learn what the doctor-patient relationship entails? A super assessment tool? An overarching overlord that attempts to understand, and then score, the “healthcare workers” on their “conversions” to “actual treatment”?
This constant “overwatch” has negative consequences. Here’s another:
When humans turn over decision making to a machine, they no longer use their own knowledge and experience—just ask taxi drivers whose street knowledge has been superseded by Google Maps. In her research about call-center automation, Virginia Doellgast, professor of comparative employment relations at Cornell University, has found that humans who are tightly monitored by an algorithm, forced to follow a script or have little control over how they work are more likely to get burned out and find it harder to solve customer problems.
Interactions between humans cannot be reduced to algorithms. The new snake oil is “sentiment analysis”:
They are using conversational AI to detect and measure more subjective human emotions and behavior through a technique called sentiment analysis, a tool that decides if conversations are positive, negative or neutral. Some models evaluate words and context to score conversations, and others include voice pitch, tone and cadence. Comcast analyzes most conversations between customers and agents and scores employees on behaviors such as being “warm and friendly,” and “make it effortless.”
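For what it’s worth, the text-only flavour of this is trivial to bolt on, which is exactly why it gets sold so readily. Here is a minimal sketch using NLTK’s stock VADER scorer; the sample transcripts and the conventional ±0.05 cut-offs are illustrative, and the commercial products layer voice pitch, tone and cadence on top of something like this:

```python
# A toy version of the "sentiment analysis" the article describes:
# bucket a call transcript as positive, negative or neutral.
# Words only -- no pitch, tone or cadence, unlike the commercial tools.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
analyzer = SentimentIntensityAnalyzer()

def label_call(transcript: str) -> str:
    """Bucket a transcript by VADER's compound score (conventional +/-0.05 cut-offs)."""
    compound = analyzer.polarity_scores(transcript)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(label_call("Thank you, you have been so warm and friendly!"))       # positive
print(label_call("I am so frustrated, this has been terrible service."))  # negative
```

Notice what a model like this cannot see: a monotone but genuinely helpful agent, like the one quoted further down, scores “neutral” no matter how well she actually served the caller.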
And as I mentioned earlier, there will be mention of the “cost savings”:
“When Charlie gets involved, time resolution is faster for the customer,” said HomeServe USA Chief Executive Officer Tom Rusin. During a major December storm, she helped 10,000 customers, equivalent to 12% of the total affected, to book claims and schedule repairs without talking to an agent. At this rate, she will pay for herself within 18 months of purchase. “It’s taking out hundreds of thousands of minutes from our calls a year,” said Mr. Rusin. “And a minute’s expensive.”
Here’s the real human impact story:
She had worked in customer service her whole adult life, as a hotel front desk manager and a cashier in retail and fast food, and thought call center work would be fulfilling. Although she enjoyed helping customers, she kept scoring low on the AI-generated sentiment scores. She has tinnitus and speaks with a monotone speech pattern, she said, and doesn’t always hear clearly if callers speak softly.
The AI marked her down for not using specific keywords, she said, although she never discovered what words she was supposed to say. She said her supervisor listened to the calls and told her, “it sounds like you’re doing a really good job.”
Although the job paid $20 an hour and included a free cable package, she decided it wasn’t worth the cost. “I got to the point where I couldn’t take it anymore.” Nine months into the job, she quit.
As if this wasn’t enough:
HomeServe has big plans for Charlie this year. The company will introduce real-time guidance for agents that will suggest what they should say or do next. “It will auto-populate the script so [an agent] doesn’t have to think so much about what to say to get the conversation started,” said Ms. Cloud.
Pop-ups on agents’ screens will suggest the “next best action,” she said. It might detect that a customer already has gas-line insurance and suggest the agent sell water-line coverage as well. Charlie will tell agents how to speak. “She might say, ‘Hey, there’s a long pause here or you’re talking too fast,’ ” Ms. Cloud said. She emphasized that it will be voluntary, not required, for agents to take Charlie’s advice. Also on the agenda: Charlie will start scoring the humans on their call performance.
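To demystify the “next best action” pop-up: underneath, this kind of prompt is usually just a lookup against the customer’s existing policies. A toy sketch, where the cross-sell pairs are my assumption and not HomeServe’s actual rules:

```python
# Illustrative "next best action" logic: check what the customer already
# holds and pitch a complementary coverage they do not have yet.
# The CROSS_SELL table is invented for the example.
CROSS_SELL = {
    "gas-line": "water-line",
    "water-line": "sewer-line",
}

def next_best_action(active_coverages: set) -> str:
    """Return a pitch for the agent's pop-up, or an empty string if none applies."""
    for held, pitch in CROSS_SELL.items():
        if held in active_coverages and pitch not in active_coverages:
            return f"Suggest {pitch} coverage"
    return ""

print(next_best_action({"gas-line"}))                # Suggest water-line coverage
print(next_best_action({"gas-line", "water-line"}))  # Suggest sewer-line coverage
```

The unsettling part is not this lookup; it is wiring it to a live transcript so the machine decides, mid-call, when the human should start selling.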
It is critical to understand how these systems will impact the trust between humans and companies. I am not a fan of call centres, but they are a “necessary evil” for companies. Yes, a call centre is a sunk cost without a specific return on the investment, but companies will do everything to turn that around. There are numerous case studies on how shoddy customer service is actually profitable: the company deprioritises most callers and tracks the high-net-worth individuals instead.
Healthcare is in for a very rough ride this year and beyond. We should brace ourselves.