There is good research that has effectively gone down the drain. It started with the hype around autonomous cars (an aimless pursuit); now the same companies are scooping up personal data via voice commands.
Real-time transcription in regional languages is a significant advance. I remember reading about Indic OS, which offers a stellar opportunity for mobile devices to act as digital scribes. The potential for healthcare is manifold. However, Google, with its “next billion users” initiative, is killing off these indigenous efforts.
Microsoft acknowledged that humans help review voice data generated through its speech-recognition technology—in products including its Cortana assistant and Skype messaging app—which businesses such as BMW, HP Inc., and Humana are integrating into their own products and services. Chinese tech companies including marketplace Alibaba, search giant Baidu, and phone maker Xiaomi are churning out millions of smart speakers each quarter. Industry analysts say Google and Facebook Inc. are likewise betting audio data will greatly enhance their mammoth ad businesses. Internet browsing tells these companies a tremendous amount about people, but audio recordings could make it much easier for AI to approximate ages, genders, emotions, and even locations and interests.
The question is: do voice transcription tools such as Siri help physicians in any way with digital transcription of medical records? On-premise hardware would be an expensive solution. Bringing the cloud into the picture would be fraught with legal and regulatory issues. Data localisation requirements, for example in India, would make it more onerous to deploy widely, because locally hosted cloud infrastructure is expensive.
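To make the on-premise trade-off concrete, here is a minimal sketch of fully offline dictation transcription, assuming the open-source Vosk library and a locally downloaded model (the model path and audio file name below are illustrative, not a prescribed setup). Audio never leaves the machine, which sidesteps data-localisation concerns, but every site would still have to provision and maintain its own hardware and models.

```python
# Minimal sketch: on-device (no cloud) speech-to-text with Vosk.
# Assumes `pip install vosk` and a model downloaded to MODEL_DIR.
import json
import wave

from vosk import Model, KaldiRecognizer

MODEL_DIR = "models/vosk-model-small-hi-0.22"  # illustrative path to a local model
AUDIO_FILE = "dictation.wav"                   # 16 kHz, 16-bit mono PCM


def transcribe(audio_path: str, model_dir: str) -> str:
    """Transcribe a WAV dictation entirely on the local machine."""
    model = Model(model_dir)
    with wave.open(audio_path, "rb") as wf:
        recognizer = KaldiRecognizer(model, wf.getframerate())
        chunks = []
        while True:
            data = wf.readframes(4000)
            if len(data) == 0:
                break
            if recognizer.AcceptWaveform(data):
                chunks.append(json.loads(recognizer.Result())["text"])
        chunks.append(json.loads(recognizer.FinalResult())["text"])
    return " ".join(c for c in chunks if c)


if __name__ == "__main__":
    print(transcribe(AUDIO_FILE, MODEL_DIR))
```

The same function could be pointed at a cloud speech API instead, but that is exactly where the legal and regulatory questions above come in.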
via Is Alexa Always Listening? How Amazon, Google, Apple Hear, Record – Bloomberg