AI and Ethics: Evolving ideas

I have been thinking about the ethical dimensions of AI/ML and its impact on healthcare. I have read a lot on the subject, but most of it comes from a Western perspective, where the same ideas are rehashed with a lot of hand-wringing. A couple of pieces in the mainstream media sound outright alarmist.

If you look at the entire process dispassionately, with a detached eye, you realise that AI is good at automating repetitive tasks. The “insights” gleaned from the same systems are difficult to realise, because noisy input data gets garbled along the way; there is no magical algorithm that can straighten bad data out and give anything meaningful.

Here’s a blurb from HBR:

As algorithmic decision-making’s role in calculating the distribution of limited resources increases, and as humans become more dependent on and vulnerable to the decisions of AI, anxieties about fairness are rising. How unbiased can an automated decision-making process with humans as the recipients really be?

If the historical decision-making processes include biases, those biases are automatically reflected in what the machine learning algorithm learns too. That is why HBR is pushing for “interpretable AI”. Fairness in the decision-making process is welcome.
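To make the bias-propagation point concrete, here is a minimal sketch of what measuring such a skew might look like. This is my illustration, not HBR’s: the data, column names, and the demographic-parity metric chosen are all hypothetical, picked only to show that a model trained on biased historical decisions reproduces a measurable skew.

```python
# Hypothetical fairness check: demographic parity gap.
# All data and column names below are invented for illustration.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str,
                           prediction_col: str) -> float:
    """Gap between the highest and lowest rate of positive
    predictions across the groups in `group_col`."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Toy predictions a model might emit after training on
# historically biased hiring decisions.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1,    1,   1,   0,   1,   0,   0,   0],
})

print(demographic_parity_gap(df, "group", "predicted"))  # 0.5
```

A gap of 0.5 means one group receives positive predictions at a rate 50 percentage points higher than the other. The metric flags the skew, but deciding whether the skew is actually unfair remains a human judgement, which is exactly the limitation I return to below.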

Here’s the conclusion:

Organizations need to recognize that their stakeholders will perceive them, not the algorithms they deploy, as responsible for any unfair outcomes that may emerge. What makes for AI fairness will then also be a function of how fair stakeholders perceive the company generally to be. Research has shown that fairness perceptions, in addition to distributive fairness that algorithms have mastered to some extent, may entail how fairly the organization treats its employees and customers, whether it communicates in transparent ways, and whether it is regarded as respectful toward the community at large. Organizations adopting AI to participate in decision-making are advised to put the necessary time and energy into building the right work culture: one with a trustworthy and fair organizational image.

I think the HBR write-up is rather too optimistic and has attempted to create specific “goals”. Still, if you are utilising AI in a decision-making process, it is essential to understand the limitations of the model and then proceed with caution, applying intuition and common sense.

Here’s something from the Singapore government:

Intended as a companion guide to the Model Framework, ISAGO aims to help organisations assess the alignment of their AI governance practices with the Model Framework. It also provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework.

I was wondering: at what point do these directive principles become part of actual policy? It would interest me to see the Singaporean government negotiating its way through that transition.

Ethics and fairness are principally human constructs. I believe that attempting to encode a “fairness quotient” into algorithms is chasing down a rabbit hole. Would the model be sued for firing a person of colour if that individual slacks at work? There are time-honoured principles to address these issues, and it is futile to wade into this territory, unless it merely looks good in the PR marketing material claiming that corporations are concerned about “diversity”.