Here’s another interesting article from the FT:
Never mind Big Tech — ‘little tech’ can be dangerous at work too | Financial Times
The EU has proposed regulations that would designate workplace AI products “high risk”, requiring providers to allow for human oversight. In the meantime, some gig workers are challenging automated decisions in the courts using the EU’s General Data Protection Regulation. Last year, for example, a court in Amsterdam ordered Ola Cabs to give drivers access to anonymised ratings on their performance, to personal data used to create their “fraud probability score” and to data used to create an earnings profile that influences work allocation.
There are numerous other examples cited in the write-up, but I’d like to focus on this one: how automated systems (or, more precisely, their secret algorithms) will be forced to “open up” to regulators on work allocation. I am not sure what came of the judgement or how much access the company ultimately had to provide, but such companies’ valuations rest precisely on keeping their algorithms secret. Work allocation depends on numerous factors: aggregation of demand, artificial scarcity, and modelling that accounts for population behaviour, among others.
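To make that opacity concrete, here is a minimal, entirely hypothetical sketch of what a work-allocation score might look like. The factor names, weights, and structure are assumptions made purely for illustration; nothing here is drawn from Ola or any other platform’s actual system.

```python
# Hypothetical illustration only: a toy work-allocation score of the kind
# gig platforms are assumed to keep secret. Factor names and weights are
# invented for this sketch, not taken from any real system.
from dataclasses import dataclass


@dataclass
class DriverProfile:
    rating: float                  # anonymised performance rating, 0-5
    fraud_probability: float       # platform-assigned score, 0-1
    recent_acceptance_rate: float  # behavioural signal, 0-1


def allocation_score(driver: DriverProfile,
                     local_demand: float,
                     available_drivers: int) -> float:
    """Combine demand, scarcity, and behavioural signals into one opaque number.

    Higher scores mean the driver is more likely to be offered the next job.
    """
    # Demand aggregation / artificial scarcity: more jobs per available
    # driver raises the score for everyone in that area.
    scarcity = local_demand / max(available_drivers, 1)

    # Behavioural modelling: reward high ratings and acceptance,
    # penalise the platform's own fraud-probability estimate.
    behaviour = (0.5 * (driver.rating / 5.0)
                 + 0.3 * driver.recent_acceptance_rate
                 - 0.2 * driver.fraud_probability)

    return scarcity * behaviour


if __name__ == "__main__":
    driver = DriverProfile(rating=4.7, fraud_probability=0.1,
                           recent_acceptance_rate=0.9)
    print(allocation_score(driver, local_demand=120, available_drivers=80))
```

The point is not the arithmetic but the inputs: the performance rating, the fraud probability score, and the behavioural signals feeding an earnings-relevant decision are exactly the kinds of personal data and decision logic the Amsterdam ruling forced into the open.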
Hospital management will face similar questions around patient flow, for example in how patients are assigned to an initial consultation. It will become increasingly difficult to justify employer-employee relationships built on such precarious trust. AI-driven “clinical decision support systems” will be granted increasing autonomy as patients become comfortable with their use and hospitals come to understand the risks of vicarious liability.
We are sitting on a powder keg of massive, understated disruption. Be prepared to understand it.