AI Fairness is not just an ethical issue: a rational look at why the underlying assumptions are flawed

Are the algorithms “fair”? The authors cite the following incident from the UK:

The authority that administers A-Level college entrance exams in the UK, Ofqual, recently found itself mired in scandal. Unable to hold live exams because of Covid-19, it designed and employed an algorithm that based scores partly on the historical performance of the schools students attended. The outcry was immediate, as students who were already disadvantaged found themselves further penalized by artificially deflated scores, their efforts disregarded and their futures thrown into disarray.
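To see the mechanism concretely, here is a minimal sketch of how blending an individual’s mark with a school’s historical average deflates scores for strong students at historically weaker schools. The weighting and the marks below are entirely made up; Ofqual’s actual model was more elaborate, but the deflation effect works the same way.

```python
# Toy illustration with hypothetical weights and marks -- NOT Ofqual's
# actual model. Blending a student's own mark with the school's historical
# average drags strong students at weak schools down, and lifts students
# at strong schools up, regardless of individual effort.

def adjusted_score(student_mark: float, school_history: float, w: float = 0.5) -> float:
    """Weighted blend of the student's own mark and the school's historical average."""
    return w * student_mark + (1 - w) * school_history

# Two students with identical marks at different schools:
print(adjusted_score(85, school_history=60))  # 72.5 -- deflated
print(adjusted_score(85, school_history=90))  # 87.5 -- inflated
```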

The authors use this example to highlight the issue of bias. They try to identify the sources of bias:

The first is bias in the data that systems learn from. You can see this play out for yourself: do a Google image search for “professional haircut,” and another for “unprofessional haircut.” “Professional haircut” turns up results that are exclusively white men, while “unprofessional haircut” has much more gender and racial diversity…. A second source of bias occurs in the way algorithms are designed. Machine learning algorithms generally rely on correlation, rather than causal relationships.
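To unpack what the quoted authors mean by correlation rather than causation, here is a hypothetical sketch: a feature that is causally irrelevant but historically correlated with the labels (an encoded postcode, say) gets a large learned weight, because nothing in the training objective can tell the two apart. All names and numbers are invented for illustration.

```python
# Hypothetical sketch: a classifier trained on labels that merely correlate
# with a proxy feature will exploit that proxy, since the objective only
# rewards predictive accuracy, not causal relevance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
postcode = rng.integers(0, 2, n)   # proxy feature, causally irrelevant
merit = rng.normal(0, 1, n)        # the thing we actually care about
# Historical labels depended on merit AND on the postcode (e.g. past
# gatekeeping), so the proxy correlates with the label.
label = (merit + 1.5 * postcode + rng.normal(0, 0.5, n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([merit, postcode]), label)
print(model.coef_)  # the postcode coefficient is large: the proxy is "useful"
```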

I think this is a causal claim without much merit. Why should bias be the bedrock of the argument? The more pertinent point is how the black boxes of these algorithms work. Take professional assessment, for example: how can that ever be free of bias? There is always an attempt to be objective, but even if you put an algorithm to work, it will only mimic the way humans solve the problem, bias included. The simplest solution: don’t automate it at all. That would solve the problem of bias once and for all!
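A hypothetical sketch of that mimicry: if the training labels are past human assessments, a model that minimizes its error on them must reproduce whatever systematic skew those assessments carried. The groups, numbers, and size of the bias below are invented for illustration.

```python
# Hypothetical sketch: labels come from past human assessors who rated
# group B systematically lower at the same underlying merit. A model fit
# to those labels reproduces the skew for otherwise identical candidates.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)                            # 0 = group A, 1 = group B
merit = rng.normal(70, 10, n)
human_rating = merit - 5 * group + rng.normal(0, 2, n)   # biased historical labels

model = LinearRegression().fit(np.column_stack([merit, group]), human_rating)
same_candidate = np.array([[75, 0], [75, 1]])            # identical merit, different group
print(model.predict(same_candidate))                     # ~[75, 70]: the bias is learned
```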

The authors do offer a solution, but of course it is NOT objective:

Unless there is concerted intervention, algorithms will continue to reflect and reinforce the prejudices that hold society and business back. For example, developers should no longer be able to simply insert open source code into systems without knowing what’s in it, and software firms should no longer be able to use the concept of “proprietary algorithms” to obfuscate scrutiny of shoddy code design.

The solution instead is to understand the limitations of AI and NOT give in to the surrounding hype. Leave the human jobs to humans. A robotic arm in the workflow won’t suffer from bias, because there is no judgment involved for bias to creep into. Or take robots in hazardous lines of work, or in areas that require precise titration of liquids; I don’t think anyone has any reason to complain there. The authors round off their argument with improving access to credit for those at risk, but fintech is a different beast that doesn’t fit the pattern of discussion here, and I’d avoid it. The bias assessments, however, are flawed and require a careful re-look at which part of the workflow we are considering for automation. The question that needs to be addressed is whether we really need to put machine learning there at all.