Kate Crawford writing for Nature:
Lessons from clinical trials show why regulation is important. Federal requirements and subsequent advocacy have made many more clinical-trial data available to the public and subject to rigorous verification. This becomes the bedrock for better policymaking and public trust. Regulatory oversight of affective technologies would bring similar benefits and accountability. It could also help in establishing norms to counter over-reach by corporations and governments.
Yet companies continue to sell software that will affect people’s opportunities without clearly documented, independently audited evidence of effectiveness. Job applicants are being judged unfairly because their facial expressions or vocal tones don’t match those of employees; students are being flagged at school because their faces seem angry. Researchers have also shown that facial-recognition software interprets Black faces as having more negative emotions than white faces do.
Kate is partially right.
Keeping the trial data public? As a bedrock for “trust”? I can envisage pitchforks on Twitter once the pharma companies make the data public and people realise that the multi-billion-dollar valuations are basically vaporware. What will we do then?
Kate is being incredibly naive here, and she offers no justification for her suggestions.
AI is being pushed as a band-aid solution for everything under the sun. Take the ed-tech space, where companies are pouring in billions to offer interactive classrooms. Distraught parents have no way to separate the wheat from the chaff. Meanwhile, readability scores have plummeted because parents are relying on the magic of AI to “suggest” something radical.
The lessons learned usually manifest themselves as experience; it is impossible to judge the impact of a technology over a short span. Kate gives the example of the polygraph, and it took the Supreme Court to conclude that there is no consensus that polygraph evidence is reliable.
That’s why I am pushing for quantifiable metrics. Voice intonation, body language, workplace surveillance and the like are fancy ideas, but they have little relevance in ML because they are not quantifiable.
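To make “quantifiable” concrete, here is a minimal sketch (in Python, with entirely hypothetical data and group labels) of the kind of auditable number a regulator could actually demand from a vendor: how often an emotion classifier flags each demographic group as “angry” compared with the overall rate.

```python
# A minimal sketch of a quantifiable audit metric for an emotion classifier:
# the rate at which each demographic group is flagged as "angry", compared
# against the overall rate. The records below are hypothetical; in a real
# audit they would come from a labelled, independently held test set.
from collections import defaultdict

# (group, predicted_label) pairs from a hypothetical model evaluation.
predictions = [
    ("group_a", "angry"), ("group_a", "neutral"), ("group_a", "neutral"),
    ("group_b", "angry"), ("group_b", "angry"), ("group_b", "neutral"),
]

counts = defaultdict(lambda: {"angry": 0, "total": 0})
for group, label in predictions:
    counts[group]["total"] += 1
    counts[group]["angry"] += label == "angry"

overall = sum(c["angry"] for c in counts.values()) / len(predictions)
for group, c in sorted(counts.items()):
    rate = c["angry"] / c["total"]
    # A large gap between a group's rate and the overall rate is the kind
    # of concrete, reportable number oversight could require.
    print(f"{group}: flagged angry {rate:.0%} (overall {overall:.0%})")
```

A number like this is crude, but unlike “reading body language” it can be measured, reported and independently re-checked, which is the whole point.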
Do we need regulation? Possibly yes. However, legislation is a socio-cultural issue, and this kind of debate has been done to death in every major newspaper and, increasingly, in journals.
Yet companies continue to sell software that will affect people’s opportunities without clearly documented, independently audited evidence of effectiveness. Job applicants are being judged unfairly because their facial expressions or vocal tones don’t match those of employees; students are being flagged at school because their faces seem angry. Researchers have also shown that facial-recognition software interprets Black faces as having more negative emotions than white faces do.
ML isn’t making the judgement calls; it’s the people relying on the historical data. Who’s going to scrub that? Kate should instead have provided better context on how to “correct the historical wrongs”.